Kafka with SASL_PLAINTEXT authentication - apache-kafka

I'm using the following docker-compose configuration:
app-zookeeper:
  image: wurstmeister/zookeeper
  container_name: app-zookeeper
  ports:
    - 2181:2181
app-kafka:
  build: ../images/kafka
  container_name: app-kafka
  ports:
    - 9092:9092
  environment:
    KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_LOCAL_HOST}
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://:9092
    KAFKA_LISTENERS: SASL_PLAINTEXT://:9092
    KAFKA_ZOOKEEPER_CONNECT: app-zookeeper:2181
    KAFKA_DELETE_TOPIC_ENABLE: "true"
    KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-512
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
    CUSTOM_INIT_SCRIPT: "export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
File kafka_server_jaas.conf:
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="admin123";
};
In images/kafka I have a Dockerfile:
FROM wurstmeister/kafka
# Authentication
COPY kafka_server_jaas.conf /opt/kafka/config/
# Define env vars for authentication
ENV CUSTOM_INIT_SCRIPT="export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
ENV KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
# create user
RUN kafka-configs.sh --zookeeper <DOCKER_LOCAL_HOST>:2181 --alter --add-config='SCRAM-SHA-512=[password="admin123"]' --entity-type users --entity-name admin
# List users
RUN kafka-configs.sh --zookeeper <DOCKER_LOCAL_HOST>:2181 --describe --entity-type users
Then I start the ZooKeeper and Kafka containers.
On the Kafka container I get this error and am not able to connect:
ERROR [Controller id=1001, targetBrokerId=1001] Connection to node 1001 failed authentication due to: Authentication failed due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
On the Kafka container, the env var KAFKA_OPTS is defined:
KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf
Any clue?
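For context on why the broker would report invalid credentials: SCRAM-SHA-512 never compares plaintext passwords. The broker checks the client's proof against a salted, iterated hash stored in ZooKeeper, so the credentials created with kafka-configs.sh must exist and match before the inter-broker login can succeed. A minimal sketch of the RFC 5802 SaltedPassword derivation using only Python's standard library (the salt and iteration count here are illustrative, not the values Kafka generated):

```python
import hashlib
import os

def scram_salted_password(password: str, salt: bytes, iterations: int) -> bytes:
    """RFC 5802 Hi() is PBKDF2 with HMAC; SCRAM-SHA-512 uses SHA-512."""
    return hashlib.pbkdf2_hmac("sha512", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)            # a random per-user salt, as Kafka stores one
stored = scram_salted_password("admin123", salt, 4096)

# Authentication succeeds only when both sides derive the same value:
assert scram_salted_password("admin123", salt, 4096) == stored
assert scram_salted_password("wrong-password", salt, 4096) != stored
```

If no SCRAM entry was ever written for the admin user (for example because the kafka-configs.sh step did not actually run against the live ZooKeeper), every inter-broker login attempt fails exactly this comparison.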

Related

No JAAS configuration section named 'Client' was found in specified JAAS configuration file

Can't deploy Kafka with SASL authentication
Here is my docker-compose.yml
version: '3.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      KAFKA_OPTS:
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
        -Dquorum.auth.enableSasl=true
        -Dquorum.cnxn.threads.size=20
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
        -DjaasLoginRenew=3600000
        -DrequireClientAuthScheme=sasl
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
    networks:
      - kafka-cluster-network
  kafka:
    image: confluentinc/cp-kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_LISTENERS: SASL_PLAINTEXT://kafka:9092,SASL_PLAINTEXT://kafka:9092
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_OPTS:
        -Dzookeeper.sasl.client=true
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
Zookeeper is deployed without problems. But Kafka logs:
[2023-02-02 11:49:24,708] WARN SASL configuration failed. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf'
kafka_server_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="kafkabroker"
  password="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password";
};
zookeeper_server_jaas.conf
Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="password";
};
The error is saying it wants a ZooKeeper Client JAAS section, whereas you've only configured a Kafka client config in the broker.
Also, /home/etozhekim doesn't exist in the Confluent images.
Add a valid right-hand-side container path to the volume mapping, such as :/tmp/jaas.conf
Then use -Djava.security.auth.login.config=/tmp/jaas.conf
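The LoginException above boils down to a name lookup: the broker's ZooKeeper client looks for a top-level section literally named Client in whatever file java.security.auth.login.config points to. A quick sketch of that check (a hypothetical helper, not Kafka's actual parser), run against a JAAS file that only declares KafkaServer:

```python
import re

def jaas_sections(text: str) -> list[str]:
    """Return the top-level section names declared in a JAAS config."""
    return re.findall(r"^\s*(\w+)\s*\{", text, flags=re.MULTILINE)

jaas = """
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="kafkabroker"
  password="password";
};
"""

# Without a Client section, the broker's ZooKeeper connection falls back
# to no SASL (or fails outright if ZooKeeper requires authentication).
assert "Client" not in jaas_sections(jaas)
```

So the thing to verify is that the file actually mounted into the container (not the one on the host) contains the Client section.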

Kafka cluster on multiple nodes with docker-compose

I'm trying to set up Kafka cluster on 3 nodes, with docker-compose.
node1:
version: '3.1'
services:
  zookeeper:
    image: zookeeper:3.7.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=node2:2888:3888;2181 server.3=node3:2888:3888;2181
  kafka:
    image: wurstmeister/kafka:2.13-2.7.0
    container_name: kafka
    hostname: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: node1:2181,node2:2181,node3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://node1:9095
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9095
      KAFKA_CREATE_TOPICS: "test:4:1"
    ports:
      - 9095:9095
node2:
version: '3.1'
services:
  zookeeper:
    image: zookeeper:3.7.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=node1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=node3:2888:3888;2181
  kafka:
    image: wurstmeister/kafka:2.13-2.7.0
    container_name: kafka
    hostname: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: node1:2181,node2:2181,node3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://node2:9095
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9095
      #KAFKA_CREATE_TOPICS: "test:4:1"
    ports:
      - 9095:9095
node3:
version: '3.1'
services:
  zookeeper:
    image: zookeeper:3.7.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=node1:2888:3888;2181 server.2=node2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
  kafka:
    image: wurstmeister/kafka:2.13-2.7.0
    container_name: kafka
    hostname: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: node1:2181,node2:2181,node3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://node3:9095
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9095
      #KAFKA_CREATE_TOPICS: "test:4:1"
    ports:
      - 9095:9095
When I run it (on the respective nodes):
docker-compose -f docker-compose-node1.yml up -d
docker-compose -f docker-compose-node2.yml up -d
docker-compose -f docker-compose-node3.yml up -d
If I check the logs, there are no errors. On the contrary, I can see something like:
INFO Registered broker 1 at path /brokers/ids/1 ...
INFO Registered broker 2 at path /brokers/ids/2 ...
INFO Registered broker 3 at path /brokers/ids/3 ...
Two things are strange. First, on the node where I try to create a topic, the logs stop at:
creating topics: test:4:1
Created topic test.
If I set up a single node (localhost), everything works fine. The second strange thing is that node2 is always elected as controller.
This is what I get with kafkacat:
kafkacat -b node1:9095,node2:9095,node3:9095 -L
Metadata for all topics (from broker 2: node2:9095/2):
 3 brokers:
  broker 2 at node2:9095 (controller)
  broker 3 at node3:9095
  broker 1 at node1:9095
 1 topics:
  topic "test" with 4 partitions:
    partition 0, leader 2, replicas: 2, isrs: 2
    partition 1, leader 3, replicas: 3, isrs: 3
    partition 2, leader 1, replicas: 1, isrs: 1
    partition 3, leader 2, replicas: 2, isrs: 2
But if I run:
docker exec -it kafka kafka-console-consumer.sh --bootstrap-server node1:9095,node2:9095,node3:9095 --topic test --from-beginning
I get loads of warnings:
[2021-04-12 14:31:06,848] WARN [Consumer clientId=consumer-console-consumer-95791-1, groupId=console-consumer-95791] Received unknown topic or partition error in ListOffset request for partition test-2 (org.apache.kafka.clients.consumer.internals.Fetcher)
and
[2021-04-12 14:31:06,848] WARN [Consumer clientId=consumer-console-consumer-95791-1, groupId=console-consumer-95791] Received unknown topic or partition error in ListOffset request for partition test-1 (org.apache.kafka.clients.consumer.internals.Fetcher)
As if partitions 1 and 2 didn't exist.
If I try to produce a message:
docker exec -it kafka-tn kafka-console-producer.sh --bootstrap-server tn00:9095,tn01:9095,tn02:9095 --topic test
I get loads of errors:
[2021-04-12 14:40:33,273] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 41 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
Am I doing something crazy here, or is it just some minor misconfiguration?
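One thing worth checking in a setup like this (an observation, not a confirmed diagnosis): the inter-broker listener is advertised without a host (INSIDE://:9092). With an empty host the broker falls back to its own hostname, and all three containers here set hostname: kafka, so each broker may end up advertising the same inter-broker address. A small sketch of how the advertised-listeners string decomposes:

```python
def parse_listeners(spec: str) -> list[tuple[str, str, int]]:
    """Split 'NAME://host:port,...' into (name, host, port) tuples."""
    out = []
    for entry in spec.split(","):
        name, rest = entry.split("://", 1)
        host, port = rest.rsplit(":", 1)
        out.append((name, host, int(port)))
    return out

advertised = parse_listeners("INSIDE://:9092,OUTSIDE://node1:9095")
# The INSIDE host is empty; the broker substitutes its own hostname,
# which is "kafka" on all three nodes in this compose setup.
assert advertised == [("INSIDE", "", 9092), ("OUTSIDE", "node1", 9095)]
```

Giving each node's INSIDE listener an explicit, routable host (e.g. INSIDE://node1:9092) would rule this out as the cause of the replication oddities.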

Zookeeper: cnxn.saslServer is null and Kafka:the quorum member's saslToken is null

To allow only Kafka to create and delete topics, I am setting up SASL (PLAIN) authentication between Kafka and ZooKeeper. I get the following error and cannot figure out why.
zookeeper_1 | 2020-07-20 10:19:06,907 [myid:] - ERROR [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#1063] - cnxn.saslServer is null: cnxn object did not initialize its saslServer properly.
kafka_1 | [2020-07-20 10:19:06,909] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
kafka_1 | javax.security.sasl.SaslException: Error in authenticating with a Zookeeper
Quorum member: the quorum member's saslToken is null.
kafka_1 | at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
kafka_1 | at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
kafka_1 | at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
kafka_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
kafka_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
kafka_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
kafka_1 | [2020-07-20 10:19:06,912] ERROR [ZooKeeperClient Kafka server] Auth failed. (kafka.zookeeper.ZooKeeperClient)
docker-compose-sasl-plaintext.yml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    environment:
      KAFKA_OPTS: '-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider'
    volumes:
      - ./sasl-plaintext/kafka_server_jaas.conf:/opt/kafka/config/kafka_server_jaas.conf
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: "SASL_PLAINTEXT://kafka:9092"
      KAFKA_ADVERTISED_LISTENERS: "SASL_PLAINTEXT://kafka:9092"
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: "SASL_PLAINTEXT"
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: "PLAIN"
      KAFKA_SASL_ENABLED_MECHANISMS: "PLAIN"
      KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./sasl-plaintext/kafka_server_jaas.conf:/opt/kafka/config/kafka_server_jaas.conf
kafka_server_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="wl"
  password="wl-secret"
  user_wl="wl-secret";
};
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="wl"
  password="wl-secret"
  user_wl="wl-secret";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="wl"
  password="wl-secret";
};
Your kafka_server_jaas.conf configures a Client section, which Kafka uses to connect to ZooKeeper with SASL.
But your ZooKeeper side doesn't configure the matching server and client users.
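Put differently: the username/password in the broker's Client section must correspond to a user_<name>="password" entry in the JAAS Server section that ZooKeeper actually loads (and Kafka's documentation uses org.apache.zookeeper.server.auth.DigestLoginModule on both sides for this, not PlainLoginModule). A sketch of that invariant, with a hypothetical helper and illustrative values:

```python
def zk_accepts(server_users: dict[str, str], username: str, password: str) -> bool:
    """Mirror of the digest-scheme check: the Server section must carry a
    user_<name> entry whose value equals the connecting client's password."""
    return server_users.get(username) == password

# Server { ... user_wl="wl-secret"; } would accept this broker's Client:
assert zk_accepts({"wl": "wl-secret"}, "wl", "wl-secret")
# A Server section with no user_* entries (as here) rejects everyone,
# which is what "the quorum member's saslToken is null" reflects.
assert not zk_accepts({}, "wl", "wl-secret")
```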

Getting Secure Kafka and Schema registry to talk to each other, as well as spring boot configuration for it

Hey, I have been trying to solve this for about two weeks now.
Basically I want Kafka to be on SSL and Schema Registry on HTTPS. No Kerberos is to be used.
I have two Spring services, one producer and one consumer (Avro).
This is my current docker-compose. With it, when I send a request to my producer it doesn't throw any errors in the application and the request times out, but the Kafka logs show kafka_1 | [2019-12-03 09:53:27,454] INFO [SocketServer brokerId=1] Failed authentication with /172.18.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
When I uncomment the lines in the docker-compose, I get "PKIX path building failed" and another error saying that the Avro payload cannot be serialized, or something like that.
zookeeper:
  image: confluentinc/cp-zookeeper:5.3.0
  ports:
    - 2181:2181
  environment:
    ZOOKEEPER_CLIENT_PORT: "2181"
    ZOOKEEPER_TICK_TIME: "2000"
kafka:
  image: confluentinc/cp-kafka:5.3.0
  ports:
    - 29094:29094
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SSL:SSL
    KAFKA_SECURITY_PROTOCOL: SSL
    KAFKA_INTER_BROKER_PROTOCOL: SSL
    KAFKA_INTER_BROKER_LISTENER_NAME: SSL
    KAFKA_LISTENERS: SSL://kafka:29094,PLAINTEXT://kafka:9092
    KAFKA_ADVERTISED_LISTENERS: SSL://kafka:29094,PLAINTEXT://kafka:9092
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    # KAFKA_SSL_CLIENT_AUTH: required
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.server.keystore.jks
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.server.truststore.jks
    KAFKA_SSL_KEY_CREDENTIALS: key_credential
    KAFKA_SSL_KEYSTORE_CREDENTIALS: key_credential
    KAFKA_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.server.keystore.jks
    KAFKA_SSL_KEYSTORE_PASSWORD: PASSWORD
    KAFKA_SSL_KEY_PASSWORD: PASSWORD
    KAFKA_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.server.truststore.jks
    KAFKA_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: key_credential
    KAFKA_HEAP_OPTS: -Xmx456M
    KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    # KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
    KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
    KAFKA_SUPER_USERS: User:CN=Kafka-domain
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /home/xubu/Documents:/etc/kafka/secrets
schema-registry:
  image: confluentinc/cp-schema-registry:5.3.0
  depends_on:
    - zookeeper
    - kafka
  ports:
    - 8181:8181
    - 8085:8085
    - 8086:8086
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8085, https://schema-registry:8086
    SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
    SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: SSL
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: SSL://kafka:29094
    SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION: /etc/kafka/client/kafka.client.truststore.jks
    SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION: /etc/kafka/client/kafka.client.keystore.jks
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION: /etc/kafka/client/kafka.client.truststore.jks
    SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION: /etc/kafka/client/kafka.client.keystore.jks
    SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_KEY_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    SCHEMA_REGISTRY_SECURITY_PROTOCOL: SSL
    SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "https"
    #SCHEMA_REGISTRY_SSL_CLIENT_AUTH: 'true'
  volumes:
    - /home/xubu/Documents:/etc/kafka/client
    - /home/xubu/Documents:/etc/kafka/consumer
Below is part of my spring boot application.yaml
spring:
  kafka:
    bootstrap-servers: kafka:29094
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      ssl:
        key-store-location: /home/xubu/Documents/kafka.client.keystore.jks
        key-password: PASSWORD
        key-store-password: PASSWORD
        trust-store-location: /home/xubu/Documents/kafka.client.truststore.jks
        trust-store-password: PASSWORD
        protocol: SSL
      properties:
        value:
          subject:
            name:
              strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
        value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
        ssl.endpoint.identification.algorithm: https
        schema.registry.url: https://schema-registry:8086
    ssl:
      trust-store-location: /home/xubu/Documents/kafka.client.truststore.jks
      trust-store-password: PASSWORD
      key-store-location: /home/xubu/Documents/kafka.client.keystore.jks
      key-store-password: PASSWORD
      key-password: PASSWORD
      protocol: SSL
      key-store-type: jks
      trust-store-type: jks
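One mismatch worth noting (an observation, not a verified fix): the broker disables hostname verification (KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""), while the Spring client sets ssl.endpoint.identification.algorithm: https, so the client will still insist that the certificate's CN/SAN matches the host it connects to. The same two knobs exist in Python's ssl module, which makes the distinction easy to see:

```python
import ssl

# A default client context verifies both the trust chain and the hostname.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True

# Setting the endpoint identification algorithm to "" on the Java side
# corresponds to turning hostname checking off here; chain trust (the
# source of "PKIX path building failed") is a separate knob.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
assert ctx.check_hostname is False
```

So a handshake can fail either because the hostname doesn't match the certificate or because the truststore doesn't contain the signing CA; the two settings above are independent and both sides need consistent choices.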
This has been my pain point for the last two or so weeks, and it's a prelude to trying out Access Control Lists on Schema Registry.

KafkaProducer cannot be created due to missing java.security.auth.login.config

I am attempting to create a KafkaProducer using the akka-stream-kafka library.
INFRASTRUCTURE
Docker-compose is used; only the kafka and zookeeper services are shown.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka:5.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
I can report that I have been able to connect to the cluster using kafka-console-consumer and kafka-console-producer on the CLI, and via the REST API, with no issues.
CONFIG
This is my Typesafe config. I attempt to connect to the Kafka broker over a plaintext connection, without any authentication.
bootstrap.servers="localhost:29092"
acks = "all"
retries = 2
batch.size = 16384
linger.ms = 1
buffer.memory = 33554432
max.block.ms = 5000
CODE
val config = ConfigFactory.load().getConfig("akka.kafka.producer")
val stringSerializer = new StringSerializer()
val producerSettings = ProducerSettings[String, String](config, stringSerializer, stringSerializer)
// some code has been omitted here.
Producer.plainSink(producerSettings)
EXCEPTION
This is the stack trace that I receive; it tells me that there is no JAAS config:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
How can I connect to the Kafka Cluster using no auth as required to run locally?
I have tried adding KAFKA_OPTS as an environment variable to the kafka service in docker-compose as well as adding it to the application.conf.
sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username='confluent' password='confluent-secret';"
In the former case, some associated services such as the kafka-rest API failed. In the latter case, I get the following exception:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:125)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
Caused by: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:297)
at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:87)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:52)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:89)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:114)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
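A note on this last stack trace (my reading of it, not a confirmed diagnosis): it runs through KerberosLogin.getServiceName, which suggests that setting sasl.jaas.config alone left the client on its default SASL mechanism, GSSAPI (Kerberos), which then demands a serviceName. For a local broker with no authentication, the simplest route is to send no SASL settings at all and use PLAINTEXT. A sketch with a hypothetical helper that strips SASL keys from a client property map before the producer is constructed:

```python
def plaintext_config(props: dict) -> dict:
    """Drop SASL/security keys and force PLAINTEXT for an unauthenticated broker."""
    cleaned = {k: v for k, v in props.items()
               if not k.startswith(("sasl.", "security."))}
    cleaned["security.protocol"] = "PLAINTEXT"
    return cleaned

props = {
    "bootstrap.servers": "localhost:29092",
    "acks": "all",
    "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule ...",
}
cleaned = plaintext_config(props)
assert "sasl.jaas.config" not in cleaned
assert cleaned["security.protocol"] == "PLAINTEXT"
```

Conversely, if SASL were actually wanted, sasl.jaas.config would need to be paired with an explicit security.protocol and sasl.mechanism so the client doesn't fall back to Kerberos.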