No JAAS configuration section named 'Client' was found in specified JAAS configuration file - apache-kafka

Can't deploy Kafka with SASL authentication
Here is my docker-compose.yml
version: '3.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      KAFKA_OPTS:
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
        -Dquorum.auth.enableSasl=true
        -Dquorum.cnxn.threads.size=20
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
        -DjaasLoginRenew=3600000
        -DrequireClientAuthScheme=sasl
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
    networks:
      - kafka-cluster-network
  kafka:
    image: confluentinc/cp-kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_LISTENERS: SASL_PLAINTEXT://kafka:9092,SASL_PLAINTEXT://kafka:9092
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_OPTS:
        -Dzookeeper.sasl.client=true
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
Zookeeper is deployed without problems, but Kafka logs:
[2023-02-02 11:49:24,708] WARN SASL configuration failed. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf'
kafka_server_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="kafkabroker"
  password="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password";
};
zookeeper_server_jaas.conf
Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="password";
};

The error is saying the broker wants a ZooKeeper 'Client' JAAS setting, whereas the Client section you've configured in the broker's file is a Kafka-style login, not a ZooKeeper one.
Also, /home/etozhekim doesn't exist in the Confluent images, so the file the JVM flag points at isn't there inside the container.
Add a valid right-hand side (the container path) to the volume mapping, such as :/tmp/jaas.conf
Then use -Djava.security.auth.login.config=/tmp/jaas.conf
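A minimal sketch of the fixed broker service, assuming the file is mounted at /tmp/jaas.conf (the container path is an arbitrary choice):

kafka:
  environment:
    KAFKA_OPTS:
      -Dzookeeper.sasl.client=true
      -Djava.security.auth.login.config=/tmp/jaas.conf
  volumes:
    - /home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf:/tmp/jaas.conf

And, since the ZooKeeper server section above uses digest login, the broker's Client section would match the user_admin entry with the same module:

Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="admin"
  password="password";
};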

Related

ksqldb not failing over to standby schema registry

I am trying to test a failover scenario for the Kafka Schema Registry.
I spun up two Schema Registry docker containers (primary and standby), and I have a ksqlDB server running in a docker container pointing at the primary Schema Registry. The source Kafka connector streams data from the database to Kafka topics, and the ksqlDB server validates the schema of the Kafka messages against the primary Schema Registry. Now I shut down the primary Schema Registry. The ksqlDB server does not fail over to the standby Schema Registry to validate the schema, so the ksqlDB server stops receiving data from the Kafka topics.
How is the ksqlDB server supposed to know which standby Schema Registry to connect to when the primary is down?
Below is the docker-compose.yml file I used:
schema-registry:
  image: confluentinc/cp-schema-registry:${CP_VERSION}
  depends_on:
    - zookeeper
    - kafka
  ports:
    - "8081:8081"
  container_name: schema-registry
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka:9092
    SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
    SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,OPTIONS'
    SCHEMA_REGISTRY_LEADER_ELIGIBILITY: "true"
    SCHEMA_REGISTRY_GROUP_ID: "schema-registry-group"
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
schema-registry-2:
  image: confluentinc/cp-schema-registry:${CP_VERSION}
  depends_on:
    - kafka
    - schema-registry
  ports:
    - "8082:8082"
  container_name: schema-registry-2
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry-2
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka:9092
    SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
    SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,OPTIONS'
    SCHEMA_REGISTRY_LEADER_ELIGIBILITY: "true"
    SCHEMA_REGISTRY_GROUP_ID: "schema-registry-group"
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8082
primary-ksqldb-server:
  image: ${KSQL_IMAGE_BASE}confluentinc/ksqldb-server:${KSQL_VERSION}
  hostname: primary-ksqldb-server
  container_name: primary-ksqldb-server
  depends_on:
    - kafka
    - schema-registry
  ports:
    - "8088:8088"
  environment:
    KSQL_CONFIG_DIR: "/etc/ksql"
    KSQL_LISTENERS: http://0.0.0.0:8088
    KSQL_BOOTSTRAP_SERVERS: kafka:9092
    KSQL_KSQL_ADVERTISED_LISTENER: http://localhost:8088
    KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
    KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
    KSQL_KSQL_EXTENSION_DIR: "/usr/ksqldb/ext/"
    KSQL_KSQL_SERVICE_ID: "nrt_"
    KSQL_KSQL_STREAMS_NUM_STANDBY_REPLICAS: 1
    KSQL_KSQL_QUERY_PULL_ENABLE_STANDBY_READS: "true"
    KSQL_KSQL_HEARTBEAT_ENABLE: "true"
    KSQL_KSQL_LAG_REPORTING_ENABLE: "true"
    KSQL_KSQL_QUERY_PULL_MAX_ALLOWED_OFFSET_LAG: 100
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER: "org.apache.kafka.log4jappender.KafkaLog4jAppender"
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_LAYOUT: "io.confluent.common.logging.log4j.StructuredJsonLayout"
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_BROKERLIST: localhost:9092
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_TOPIC: KSQL_LOG
    KSQL_LOG4J_LOGGER_IO_CONFLUENT_KSQL: INFO,kafka_appender
    KSQL_KSQL_QUERY_PULL_METRICS_ENABLED: "true"
    KSQL_JMX_OPTS: >
      -Djava.rmi.server.hostname=localhost
      -Dcom.sun.management.jmxremote
      -Dcom.sun.management.jmxremote.port=1099
      -Dcom.sun.management.jmxremote.authenticate=false
      -Dcom.sun.management.jmxremote.ssl=false
      -Dcom.sun.management.jmxremote.rmi.port=1099
When I stop the primary Schema Registry, ksqlDB is supposed to connect to the standby Schema Registry.
How would it know the other is available if you don't provide it?
KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081,http://schema-registry-2:8082
In other words, you shut down the schema-registry container, so it simply stops responding. It will not forward requests or redirect clients to another server. So you need to provide a URL list, or set up an external reverse proxy that round-robins requests to the active instance.
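As a compose-level sketch (service names as defined above), that is a single environment change on the ksqlDB service:

primary-ksqldb-server:
  environment:
    KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081,http://schema-registry-2:8082

The Schema Registry client tries each URL in the comma-separated list until one responds, which is what gives you failover.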

Zookeeper: cnxn.saslServer is null and Kafka:the quorum member's saslToken is null

To ensure that only Kafka can create and delete topics, I am setting up plaintext SASL security between Kafka and ZooKeeper. I get the following error and cannot figure out why.
zookeeper_1 | 2020-07-20 10:19:06,907 [myid:] - ERROR [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#1063] - cnxn.saslServer is null: cnxn object did not initialize its saslServer properly.
kafka_1 | [2020-07-20 10:19:06,909] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
kafka_1 | javax.security.sasl.SaslException: Error in authenticating with a Zookeeper
Quorum member: the quorum member's saslToken is null.
kafka_1 | at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
kafka_1 | at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
kafka_1 | at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
kafka_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
kafka_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
kafka_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
kafka_1 | [2020-07-20 10:19:06,912] ERROR [ZooKeeperClient Kafka server] Auth failed. (kafka.zookeeper.ZooKeeperClient)
docker-compose-sasl-plaintext.yml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    environment:
      KAFKA_OPTS: '-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider'
    volumes:
      - ./sasl-plaintext/kafka_server_jaas.conf:/opt/kafka/config/kafka_server_jaas.conf
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: "SASL_PLAINTEXT://kafka:9092"
      KAFKA_ADVERTISED_LISTENERS: "SASL_PLAINTEXT://kafka:9092"
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: "SASL_PLAINTEXT"
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: "PLAIN"
      KAFKA_SASL_ENABLED_MECHANISMS: "PLAIN"
      KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./sasl-plaintext/kafka_server_jaas.conf:/opt/kafka/config/kafka_server_jaas.conf
kafka_server_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="wl"
  password="wl-secret"
  user_wl="wl-secret";
};
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="wl"
  password="wl-secret"
  user_wl="wl-secret";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="wl"
  password="wl-secret";
};
Your kafka_server_jaas.conf configures the 'Client' section; that is what Kafka uses to connect to ZooKeeper with SASL.
But your ZooKeeper side doesn't configure the server user and the client user.
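For illustration, a sketch of the missing ZooKeeper side, assuming digest-based SASL (the scheme ZooKeeper supports out of the box) and reusing the wl/wl-secret credentials: a separate JAAS file mounted into the zookeeper container, whose Server section defines the user that the broker's Client section logs in as:

Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_wl="wl-secret";
};

With that scheme, the broker's Client section would also use org.apache.zookeeper.server.auth.DigestLoginModule rather than Kafka's PlainLoginModule.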

Getting Secure Kafka and Schema registry to talk to each other, as well as spring boot configuration for it

Hey, I have been trying to solve this for about two weeks now.
Basically, I want Kafka to be on SSL and Schema Registry on HTTPS. No Kerberos is to be used.
I have two Spring services, one a producer and the other a consumer (Avro).
This is my current docker-compose. With it, when I send a request to my producer, the application doesn't throw any errors; the request times out, but the Kafka logs show:
kafka_1 | [2019-12-03 09:53:27,454] INFO [SocketServer brokerId=1] Failed authentication with /172.18.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
When I uncomment the lines in the docker-compose, I get "PKIX path building failed" and another error saying the Avro payload cannot be serialized.
zookeeper:
  image: confluentinc/cp-zookeeper:5.3.0
  ports:
    - 2181:2181
  environment:
    ZOOKEEPER_CLIENT_PORT: "2181"
    ZOOKEEPER_TICK_TIME: "2000"
kafka:
  image: confluentinc/cp-kafka:5.3.0
  ports:
    - 29094:29094
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SSL:SSL
    KAFKA_SECURITY_PROTOCOL: SSL
    KAFKA_INTER_BROKER_PROTOCOL: SSL
    KAFKA_INTER_BROKER_LISTENER_NAME: SSL
    KAFKA_LISTENERS: SSL://kafka:29094,PLAINTEXT://kafka:9092
    KAFKA_ADVERTISED_LISTENERS: SSL://kafka:29094,PLAINTEXT://kafka:9092
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    # KAFKA_SSL_CLIENT_AUTH: required
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.server.keystore.jks
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.server.truststore.jks
    KAFKA_SSL_KEY_CREDENTIALS: key_credential
    KAFKA_SSL_KEYSTORE_CREDENTIALS: key_credential
    KAFKA_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.server.keystore.jks
    KAFKA_SSL_KEYSTORE_PASSWORD: PASSWORD
    KAFKA_SSL_KEY_PASSWORD: PASSWORD
    KAFKA_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.server.truststore.jks
    KAFKA_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: key_credential
    KAFKA_HEAP_OPTS: -Xmx456M
    KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    # KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
    KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
    KAFKA_SUPER_USERS: User:CN=Kafka-domain
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /home/xubu/Documents:/etc/kafka/secrets
schema-registry:
  image: confluentinc/cp-schema-registry:5.3.0
  depends_on:
    - zookeeper
    - kafka
  ports:
    - 8181:8181
    - 8085:8085
    - 8086:8086
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8085, https://schema-registry:8086
    SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
    SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: SSL
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: SSL://kafka:29094
    SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION: /etc/kafka/client/kafka.client.truststore.jks
    SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION: /etc/kafka/client/kafka.client.keystore.jks
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION: /etc/kafka/client/kafka.client.truststore.jks
    SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION: /etc/kafka/client/kafka.client.keystore.jks
    SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_KEY_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    SCHEMA_REGISTRY_SECURITY_PROTOCOL: SSL
    SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "https"
    # SCHEMA_REGISTRY_SSL_CLIENT_AUTH: 'true'
  volumes:
    - /home/xubu/Documents:/etc/kafka/client
    - /home/xubu/Documents:/etc/kafka/consumer
Below is part of my Spring Boot application.yaml:
spring:
  kafka:
    bootstrap-servers: kafka:29094
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      ssl:
        key-store-location: /home/xubu/Documents/kafka.client.keystore.jks
        key-password: PASSWORD
        key-store-password: PASSWORD
        trust-store-location: /home/xubu/Documents/kafka.client.truststore.jks
        trust-store-password: PASSWORD
        protocol: SSL
      properties:
        value:
          subject:
            name:
              strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
        value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
        ssl.endpoint.identification.algorithm: https
        schema.registry.url: https://schema-registry:8086
    ssl:
      trust-store-location: /home/xubu/Documents/kafka.client.truststore.jks
      trust-store-password: PASSWORD
      key-store-location: /home/xubu/Documents/kafka.client.keystore.jks
      key-store-password: PASSWORD
      key-password: PASSWORD
      protocol: SSL
      key-store-type: jks
      trust-store-type: jks
This has been my pain for the last two or so weeks, and it's meant as an intro to trying out Access Control Lists on Schema Registry.
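One rough way to narrow down an "SSL handshake failed" on the broker (a debugging sketch; the host port matches the compose file above) is to check that the listener actually serves the expected certificate before involving Spring at all:

openssl s_client -connect localhost:29094 </dev/null | openssl x509 -noout -subject -issuer

If that fails or prints an unexpected certificate, the problem is on the broker keystore side rather than in the client configuration.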

KafkaProducer cannot be created due to missing java.security.auth.login.config

I am attempting to create a KafkaProducer using the akka-stream-kafka library.
INFRASTRUCTURE
Uses docker-compose, showing the kafka and zookeeper instances only.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka:5.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
I can report that I have been able to connect to the cluster using a kafka-console-consumer, kafka-console-producer on the CLI and the REST API with no issues.
CONFIG
This is my Typesafe config. I attempt to use a plaintext connection to the broker, without any authentication.
bootstrap.servers="localhost:29092"
acks = "all"
retries = 2
batch.size = 16384
linger.ms = 1
buffer.memory = 33554432
max.block.ms = 5000
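For reference, alpakka-kafka reads the actual Kafka client properties from a kafka-clients block nested under akka.kafka.producer, so a layout along these lines is what ProducerSettings expects (a sketch using the values above; security.protocol is added explicitly here, though PLAINTEXT is also the client default):

akka.kafka.producer {
  kafka-clients {
    bootstrap.servers = "localhost:29092"
    security.protocol = PLAINTEXT
    acks = all
  }
}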
CODE
val config = ConfigFactory.load().getConfig("akka.kafka.producer")
val stringSerializer = new StringSerializer()
val producerSettings = ProducerSettings[String, String](config, stringSerializer, stringSerializer)
// some code has been omitted here.
Producer.plainSink(producerSettings)
EXCEPTION
This is the stack trace I receive; it tells me there is no JAAS config:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
How can I connect to the Kafka cluster with no auth, as required to run locally?
I have tried adding KAFKA_OPTS as an environment variable to the kafka service in docker-compose, as well as adding the following to application.conf:
sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username='confluent' password='confluent-secret';"
In the former case, some associated services such as the kafka-rest API failed. In the latter case, I get the following exception:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:125)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
Caused by: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:297)
at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:87)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:52)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:89)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:114)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
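For what it's worth, this second stack trace is the Kerberos code path: when a SASL channel is built and sasl.mechanism is not set, the client defaults to GSSAPI, which needs a serviceName, hence "No serviceName defined in either JAAS or Kafka config". If SASL/PLAIN were actually intended, the JAAS line alone would not be enough; as a sketch, the mechanism and protocol would have to be set alongside it:

security.protocol = SASL_PLAINTEXT
sasl.mechanism = PLAIN
sasl.jaas.config = "org.apache.kafka.common.security.plain.PlainLoginModule required username='confluent' password='confluent-secret';"

For the plain local setup asked about here, none of those settings are needed against the PLAINTEXT_HOST listener on localhost:29092.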

Kafka with SASL_PLAINTEXT authentication

I'm using the following docker-compose configuration:
app-zookeeper:
  image: wurstmeister/zookeeper
  container_name: app-zookeeper
  ports:
    - 2181:2181
app-kafka:
  build: ../images/kafka
  container_name: app-kafka
  ports:
    - 9092:9092
  environment:
    KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_LOCAL_HOST}
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://:9092
    KAFKA_LISTENERS: SASL_PLAINTEXT://:9092
    KAFKA_ZOOKEEPER_CONNECT: app-zookeepr:2181
    KAFKA_DELETE_TOPIC_ENBALE: "true"
    KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-512
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
    CUSTOM_INIT_SCRIPT: "export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
File kafka_server_jaas.conf:
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="admin123";
};
In images/kafka I have a Dockerfile:
FROM wurstmeister/kafka
# Authentication
COPY kafka_server_jaas.conf /opt/kafka/config/
# Define env vars for authentication
ENV CUSTOM_INIT_SCRIPT="export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
ENV KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
# create user
RUN kafka-configs.sh --zookeeper <DOCKER_LOCAL_HOST>:2181 --alter --add-config='SCRAM-SHA-512=[password="admin123"]' --entity-type users --entity-name admin
# List users
RUN kafka-configs.sh --zookeeper <DOCKER_LOCAL_HOST>:2181 --describe --entity-type users
Then I start the ZooKeeper and Kafka containers. On the Kafka container I get this error and am not able to connect:
ERROR [Controller id=1001, targetBrokerId=1001] Connection to node 1001 failed authentication due to: Authentication failed due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
On the Kafka container, I have the KAFKA_OPTS env var defined:
KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf
Any clue?
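One observation, offered as a guess rather than a confirmed fix: the RUN kafka-configs.sh lines execute at image build time, when no ZooKeeper is reachable, so the SCRAM credentials for admin are likely never created. SCRAM users are stored in ZooKeeper and have to be registered against the running ensemble before the broker tries to authenticate, e.g. from the host once the zookeeper container is up (using the published 2181 port):

kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-512=[password=admin123]' --entity-type users --entity-name admin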