Kafka configure jaas using sasl.jaas.config on kubernetes

I'm using this helm chart: https://github.com/helm/charts/tree/master/incubator/kafka
and these overrides in values.yaml
configurationOverrides:
  advertised.listeners: |-
    EXTERNAL://kafka-${KAFKA_BROKER_ID}.host-removed:$((31090 + ${KAFKA_BROKER_ID}))
  listener.security.protocol.map: |-
    PLAINTEXT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
  sasl.enabled.mechanisms: SCRAM-SHA-256
  auto.create.topics.enable: false
  inter.broker.listener.name: PLAINTEXT
  sasl.mechanism.inter.broker.protocol: SCRAM-SHA-256
  listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
based on this documentation: https://kafka.apache.org/documentation/#security_jaas_broker
(quick summary)
Brokers may also configure JAAS using the broker configuration property sasl.jaas.config. The property name must be prefixed with the listener prefix including the SASL mechanism, i.e. listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config. Only one login module may be specified in the config value. If multiple mechanisms are configured on a listener, configs must be provided for each mechanism using the listener and mechanism prefix
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="admin" \
password="admin-secret";
The problem is that when I start Kafka I get the following error:
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'plaintext.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
According to the order of precedence, it should only fall back to the static JAAS file if the above config is NOT set.
If JAAS configuration is defined at different levels, the order of precedence used is:
Broker configuration property listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config
{listenerName}.KafkaServer section of static JAAS configuration
KafkaServer section of static JAAS configuration
The helm chart doesn't provide a way to configure this JAAS file, so using this property seems to be the intended approach; I'm just confused as to what is configured incorrectly.
Note: The cluster works fine if I disable all SASL and just use plain text but that's not much good in a real environment.

You've defined 2 listeners, PLAINTEXT and EXTERNAL, and mapped both to SASL_PLAINTEXT.
Is this really what you wanted, or did you want PLAINTEXT to not require SASL and just be plaintext?
If you really want both to be SASL, then both of them need a JAAS configuration. In your question, I only see a JAAS configuration for EXTERNAL:
listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
As you've mapped PLAINTEXT to SASL_PLAINTEXT, it also requires a JAAS configuration. You can specify it with, for example:
listener.name.PLAINTEXT.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
If you wanted your PLAINTEXT listener to actually be Plaintext without SASL, then you need to update the listener mapping:
listener.security.protocol.map: |-
PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
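Putting it together, a minimal sketch of the overrides for that second option (plaintext inter-broker traffic, SASL only on the EXTERNAL listener), reusing the placeholder host and credentials from the question; note that the Kafka docs' own example writes the listener name in lower case inside this property:
configurationOverrides:
  advertised.listeners: |-
    EXTERNAL://kafka-${KAFKA_BROKER_ID}.host-removed:$((31090 + ${KAFKA_BROKER_ID}))
  listener.security.protocol.map: |-
    PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
  inter.broker.listener.name: PLAINTEXT
  sasl.enabled.mechanisms: SCRAM-SHA-256
  # sasl.mechanism.inter.broker.protocol is no longer needed once inter-broker traffic is plaintext
  listener.name.external.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";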

Related

value.subject.name.strategy for kafka s3 sink connector is not recognized

How do we configure value.subject.name.strategy based on https://docs.confluent.io/platform/current/schema-registry/connect.html#json-schema?
I put various configuration names in worker.properties, but it seems that none of them are recognized by the Kafka sink connector. As you can see in the logs, it always defaults to TopicNameStrategy.
[2022-11-21 16:40:23,663] WARN The configuration 'value.converter.subject.name.strategy' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:355)
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
[2022-11-21 16:40:23,690] WARN The configuration 'converter.subject.name.strategy' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:355)
[2022-11-21 16:40:23,690] WARN The configuration 'value.subject.name.strategy' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:355)
[2022-11-21 16:40:23,690] WARN The configuration 'value.converter.subject.name.strategy' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:355)
[2022-11-21 16:40:23,719] WARN The configuration 'converter.subject.name.strategy' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:355)
I put all of these variations in worker.properties and fed it to connect-distributed to start.
grep -i "name.strategy" /plugins/worker.properties
value.subject.name.strategy=io.confluent.kafka.serializers.subject.RecordNameStrategy
value.converter.subject.name.strategy=io.confluent.kafka.serializers.subject.RecordNameStrategy
consumer.value.subject.name.strategy=io.confluent.kafka.serializers.subject.RecordNameStrategy
consumer.value.converter.subject.name.strategy=io.confluent.kafka.serializers.subject.RecordNameStrategy
Those logs can be ignored. The consumer properties don't use those settings; only the config within the serializer does, and that is printed separately (which is where you may be seeing the default applied).
There's an open JIRA about silencing these logs, which come from converter properties being passed through to the consumer.
To configure the serializer, you go through the converter. To configure converters you need to use
value.converter.[property]=[value]
So, just as with schema.registry.url:
value.converter.value.subject.name.strategy=OtherStrategy
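For instance, a hedged worker/connector config sketch assuming the Confluent JSON Schema converter from the linked docs (the registry URL is a placeholder):
value.converter=io.confluent.connect.json.JsonSchemaConverter
value.converter.schema.registry.url=http://schema-registry:8081
# the serializer's subject name strategy, prefixed with value.converter.
value.converter.value.subject.name.strategy=io.confluent.kafka.serializers.subject.RecordNameStrategy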

How to configure Kafka for ClickHouse without an error being thrown?

I have two Kafka clusters: the first, Kafka-A, uses the SASL SCRAM-SHA-256 mechanism to authenticate; the other, Kafka-B, has no authentication configured.
To be able to connect to Kafka-A in ClickHouse, I configured a config.xml file as shown below.
My config.xml configuration:
<kafka>
<security_protocol>sasl_plaintext</security_protocol>
<sasl_mechanism>SCRAM-SHA-256</sasl_mechanism>
<sasl_username>xxx</sasl_username>
<sasl_password>xxx</sasl_password>
<debug>all</debug>
<auto_offset_reset>latest</auto_offset_reset>
<compression_type>snappy</compression_type>
</kafka>
At this point I found that I can't connect to Kafka-B using a Kafka engine table. When I try, an error occurs with the following message:
StorageKafka (xxx): [rdk:FAIL]
[thrd:sasl_plaintext://xxx/bootstrap]:
sasl_plaintext://xxx/bootstrap: SASL SCRAM-SHA-256
mechanism handshake failed: Broker: Request not valid in current SASL
state: broker's supported mechanisms: (after 3ms in state
AUTH_HANDSHAKE, 4 identical error(s) suppressed)
It seems that when connecting to Kafka-B, ClickHouse also uses SASL authentication, which leads to the error being thrown, since the Kafka-B servers are not configured for authentication.
How can I configure ClickHouse correctly to connect to the two different Kafka clusters?
ClickHouse allows you to define a Kafka config for each topic.
Use the topic name in the name of an XML section:
<kafka_mytopic>
<security_protocol>....
....
</kafka_mytopic>
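For example, a minimal sketch assuming Kafka-A is only read through tables on a topic named topic_a (topic name and credentials are placeholders); the global <kafka> section keeps only the settings that apply to every cluster:
<kafka>
    <debug>all</debug>
    <auto_offset_reset>latest</auto_offset_reset>
    <compression_type>snappy</compression_type>
</kafka>
<!-- SASL settings apply only to tables consuming topic_a (Kafka-A) -->
<kafka_topic_a>
    <security_protocol>sasl_plaintext</security_protocol>
    <sasl_mechanism>SCRAM-SHA-256</sasl_mechanism>
    <sasl_username>xxx</sasl_username>
    <sasl_password>xxx</sasl_password>
</kafka_topic_a>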

Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set from Kafka rest proxy

I am trying to use the Kafka REST Proxy with an AWS MSK cluster.
MSK Encryption details:
Within the cluster
TLS encryption: Enabled
Between clients and brokers
TLS encryption: Enabled
Plaintext: Not enabled
I have created the topic "TestTopic" on MSK and then created another EC2 instance in the same VPC as MSK to act as the REST Proxy. Here are the details from kafka-rest.properties:
zookeeper.connect=z-3.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181,z-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181
bootstrap.servers=b-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096,b-2.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
ssl.truststore.location=/tmp/kafka.client.truststore.jks
I have also created a rest-jaas.properties file with the content below:
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="username"
password="password";
};
and then set the java.security.auth.login.config using:
export KAFKA_OPTS=-Djava.security.auth.login.config=/home/ec2-user/confluent-6.1.1/rest-jaas.properties
After this, I started the Kafka REST Proxy using:
./kafka-rest-start /home/ec2-user/confluent-6.1.1/etc/kafka-rest/kafka-rest.properties
But when I tried to put an event on TestTopic by calling the service from Postman:
POST: http://IP_of_ec2instance:8082/topics/TestTopic
I got a 500 error, and on the EC2 instance I can see this error:
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:441)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:291)
at io.confluent.kafkarest.ProducerPool.buildNoSchemaProducer(ProducerPool.java:120)
at io.confluent.kafkarest.ProducerPool.buildBinaryProducer(ProducerPool.java:106)
at io.confluent.kafkarest.ProducerPool.<init>(ProducerPool.java:71)
at io.confluent.kafkarest.ProducerPool.<init>(ProducerPool.java:60)
at io.confluent.kafkarest.ProducerPool.<init>(ProducerPool.java:53)
at io.confluent.kafkarest.DefaultKafkaRestContext.getProducerPool(DefaultKafkaRestContext.java:54)
... 64 more
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:141)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:106)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:92)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:139)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:74)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:120)
at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:449)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:430)
... 71 more
I can also see that sasl.jaas.config = null in the ProducerConfig values.
Could someone please help me with this. Thanks in advance!
The issue has finally been fixed. I am posting the fix here so that it can benefit someone else:
The kafka-rest.properties file should contain the text below:
zookeeper.connect=z-3.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181,z-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181
bootstrap.servers=b-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096,b-2.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096
client.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";
client.security.protocol=SASL_SSL
client.sasl.mechanism=SCRAM-SHA-512
There was no need to create the rest-jaas.properties file, nor to export KAFKA_OPTS.
After these changes, I was able to put messages on the Kafka topic using SCRAM authentication.
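For completeness, kafka-rest also accepts producer.- and consumer.-prefixed settings if you want to scope the SASL credentials to one client type instead of both; a hedged sketch reusing the same placeholders:
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=SCRAM-SHA-512
producer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=SCRAM-SHA-512
consumer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";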

No JAAS configuration section named 'Server' was found in '/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf'

When I run the Zookeeper from the kafka_2.12-2.3.0 package as shown below, I get the above error:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf"
$ ./bin/zookeeper-server-start.sh config/zookeeper.properties
and the zookeeper_jaas.conf is
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
and the zookeeper.properties file is
server=localhost:9092
#server=localhost:2888:3888
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="ibm" password="ibm-secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/kafka/apache-zookeeper-3.5.5-bin/zookeeperkeys/client.truststore.jks
ssl.truststore.password=test1234
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
Can anyone suggest what could be the reason?
You seem to have mixed a bunch of Kafka SASL configuration into your Zookeeper configuration files. Zookeeper and Kafka have different SASL support, so this is not going to work.
I'm guessing you want to enable SASL authentication between Kafka and Zookeeper. In that case you need to follow the Zookeeper Server-Client guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication
Zookeeper does not support SASL PLAIN, but DIGEST-MD5 is pretty similar. In that case your jaas.conf file should look like:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret"
user_bob="bobsecret";
};
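With that jaas file in place, zookeeper.properties should only keep the Zookeeper-side SASL settings already present in the question and drop the Kafka client settings (sasl.jaas.config, security.protocol, sasl.mechanism, ssl.*); a hedged sketch with placeholder dataDir/clientPort:
# standard standalone settings (placeholders)
dataDir=/tmp/zookeeper
clientPort=2181
# enable SASL authentication of Zookeeper clients
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000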
Then you need to configure your Kafka brokers to connect to Zookeeper with SASL. You can do that using another jaas.conf file (this time loading it in Kafka):
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="bob"
password="bobsecret";
};
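And have the broker load that Client section the same way the question loaded the Zookeeper file, for example (file name and path are placeholders):
export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/kafka_2.12-2.3.0/config/kafka_client_jaas.conf"
./bin/kafka-server-start.sh config/server.properties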
Note: you can also enable SASL between the Zookeeper servers. To do so, follow the Server-Server guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Server-Server+mutual+authentication

"The configuration foo.bar was supplied but isn't a known config"

When I'm starting a connector in distributed mode (connect-runtime v1.0.0), there are several configuration values that are mandatory. I'm speaking of values like:
offset.storage.topic
offset.storage.partitions
key.converter
config.storage.topic
config.storage.replication.factor
rest.port
status.storage.topic
key.converter.schemas.enable
value.converter.schemas.enable
internal.value.converter
internal.key.converter
internal.key.converter.schemas.enable
internal.value.converter.schemas.enable
status.storage.partitions
status.storage.topic
value.converter
offset.flush.interval.ms
offset.storage.replication.factor
...
Once the connector is started with meaningful values for those properties, it works as expected. But at startup, the log gets flooded with entries like
WARN o.a.k.c.admin.AdminClientConfig.logUnused - The configuration 'offset.storage.topic' was supplied but isn't a known config.
for all of the above-mentioned mandatory configuration values.
There are three config classes which are logging these warnings:
org.apache.kafka.clients.consumer.ConsumerConfig
org.apache.kafka.clients.admin.AdminClientConfig
org.apache.kafka.clients.producer.ProducerConfig
So far I haven't found a reason for this behavior. What is missing or wrong here that causes these warnings? Do I have to worry about them?
There's a ticket on this issue, still open as of Nov'19:
https://issues.apache.org/jira/browse/KAFKA-7509
When running Connect, the logs contain quite a few warnings about "The configuration '{}' was supplied but isn't a known config." This occurs when Connect creates producers, consumers, and admin clients, because the AbstractConfig is logging unused configuration properties upon construction. It's complicated by the fact that the Producer, Consumer, and AdminClient all create their own AbstractConfig instances within the constructor, so we can't even call its ignore(String key) method.
And similar issue exists for KafkaStreams:
https://issues.apache.org/jira/browse/KAFKA-6793
Judging by this thread, it doesn't seem to matter.