Zookeeper: cnxn.saslServer is null and Kafka: the quorum member's saslToken is null

To allow only Kafka to create and delete topics, I am setting up SASL/PLAIN authentication between Kafka and ZooKeeper. I get the following error and cannot figure out why.
zookeeper_1 | 2020-07-20 10:19:06,907 [myid:] - ERROR [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#1063] - cnxn.saslServer is null: cnxn object did not initialize its saslServer properly.
kafka_1 | [2020-07-20 10:19:06,909] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
kafka_1 | javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
kafka_1 | at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
kafka_1 | at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
kafka_1 | at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
kafka_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
kafka_1 | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
kafka_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
kafka_1 | [2020-07-20 10:19:06,912] ERROR [ZooKeeperClient Kafka server] Auth failed. (kafka.zookeeper.ZooKeeperClient)
docker-compose-sasl-plaintext.yml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    environment:
      KAFKA_OPTS: '-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider'
    volumes:
      - ./sasl-plaintext/kafka_server_jaas.conf:/opt/kafka/config/kafka_server_jaas.conf
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: "SASL_PLAINTEXT://kafka:9092"
      KAFKA_ADVERTISED_LISTENERS: "SASL_PLAINTEXT://kafka:9092"
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: "SASL_PLAINTEXT"
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: "PLAIN"
      KAFKA_SASL_ENABLED_MECHANISMS: "PLAIN"
      KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./sasl-plaintext/kafka_server_jaas.conf:/opt/kafka/config/kafka_server_jaas.conf
kafka_server_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="wl"
  password="wl-secret"
  user_wl="wl-secret";
};
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="wl"
  password="wl-secret"
  user_wl="wl-secret";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="wl"
  password="wl-secret";
};

Your kafka_server_jaas.conf configures the 'Client' section; that is what Kafka uses when it connects to ZooKeeper with SASL. But your ZooKeeper side was never configured with the corresponding server user and client user, so its SASL server is not initialized.
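A minimal sketch of the missing ZooKeeper-side JAAS file, assuming the same wl/wl-secret credentials as above (an illustration, not a tested config): ZooKeeper's DIGEST-MD5 SASL server reads user_<name> entries from org.apache.zookeeper.server.auth.DigestLoginModule, whereas Kafka's PlainLoginModule is not usable for this.

Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_wl="wl-secret";
};

The Client section on the Kafka side is then typically written against the same DigestLoginModule, with username="wl" and password="wl-secret" matching the user_wl entry.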

Related

No JAAS configuration section named 'Client' was found in specified JAAS configuration file

Can't deploy Kafka with SASL authentication
Here is my docker-compose.yml
version: '3.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      KAFKA_OPTS: >-
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
        -Dquorum.auth.enableSasl=true
        -Dquorum.cnxn.threads.size=20
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
        -DjaasLoginRenew=3600000
        -DrequireClientAuthScheme=sasl
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
    networks:
      - kafka-cluster-network
  kafka:
    image: confluentinc/cp-kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_LISTENERS: SASL_PLAINTEXT://kafka:9092,SASL_PLAINTEXT://kafka:9092
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_OPTS: >-
        -Dzookeeper.sasl.client=true
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
Zookeeper is deployed without problems. But Kafka logs:
[2023-02-02 11:49:24,708] WARN SASL configuration failed. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf'
kafka_server_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="kafkabroker"
  password="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password";
};
zookeeper_server_jaas.conf
Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="password";
};
The error is saying the ZooKeeper client needs a 'Client' JAAS section, but the broker cannot find one at the configured path: /home/etozhekim doesn't exist inside Confluent images, so your JAAS file is never loaded.
Add a valid right-hand side container path to the volume mapping, such as :/tmp/jaas.conf, then use -Djava.security.auth.login.config=/tmp/jaas.conf (see the sketch below).
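A sketch of the corrected kafka service, assuming the same host path (the container path /tmp/jaas.conf is only an example):

kafka:
  image: confluentinc/cp-kafka
  environment:
    KAFKA_OPTS: >-
      -Dzookeeper.sasl.client=true
      -Djava.security.auth.login.config=/tmp/jaas.conf
  volumes:
    - /home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf:/tmp/jaas.conf

The same fix applies to the zookeeper service and its zookeeper_server_jaas.conf.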

Failed to send HTTP request to schema-registry

I am trying to set up a local kafka-connect stack with docker-compose, and I have a problem with my Scala producer that is supposed to send Avro messages to a Kafka topic using the Schema Registry.
In my producer (scala) code I do the following:
import java.util.Properties

val kafkaBootstrapServer = "kafka:9092"
val schemaRegistryUrl = "http://schema-registry:8081"
val topicName = "test"

val props = new Properties()
props.put("bootstrap.servers", kafkaBootstrapServer)
props.put("schema.registry.url", schemaRegistryUrl)
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
props.put("acks", "1")
and my docker-compose script reads:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:5.5.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CREATE_TOPICS: "test:1:1"
  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
  producer:
    image: producer-app:1.0
    depends_on:
      - schema-registry
      - kafka
EDIT: now the schema registry seems to be up:
schema-registry | [2021-01-17 22:27:27,704] INFO HV000001: Hibernate Validator 6.0.17.Final (org.hibernate.validator.internal.util.Version)
kafka | [2021-01-17 22:27:27,918] INFO [Controller id=1001] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2021-01-17 22:27:27,919] TRACE [Controller id=1001] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2021-01-17 22:27:27,923] DEBUG [Controller id=1001] Topics not in preferred replica for broker 1001 Map() (kafka.controller.KafkaController)
kafka | [2021-01-17 22:27:27,924] TRACE [Controller id=1001] Leader imbalance ratio for broker 1001 is 0.0 (kafka.controller.KafkaController)
schema-registry | [2021-01-17 22:27:28,010] INFO JVM Runtime does not support Modules (org.eclipse.jetty.util.TypeUtil)
schema-registry | [2021-01-17 22:27:28,011] INFO Started o.e.j.s.ServletContextHandler@22d6f11{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
schema-registry | [2021-01-17 22:27:28,035] INFO Started o.e.j.s.ServletContextHandler@15eebbff{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
schema-registry | [2021-01-17 22:27:28,058] INFO Started NetworkTrafficServerConnector@2698dc7{HTTP/1.1,[http/1.1]}{0.0.0.0:8081} (org.eclipse.jetty.server.AbstractConnector)
schema-registry | [2021-01-17 22:27:28,059] INFO Started @4137ms (org.eclipse.jetty.server.Server)
schema-registry | [2021-01-17 22:27:28,059] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
but prior to this, during the execution of the script I get:
schema-registry | ===> Launching ...
schema-registry | ===> Launching schema-registry ...
producer_1 | [main] ERROR io.confluent.kafka.schemaregistry.client.rest.RestService - Failed to send HTTP request to endpoint: http://schema-registry:8081/subjects/test-value/versions
producer_1 | java.net.ConnectException: Connection refused (Connection refused)
Could this be due to some dependency issue? It looks like the producer runs before the schema-registry has completely started, even though I did put depends_on with schema-registry for the producer ...
It looks like the cause here was that your app was trying to call the Schema Registry before it had finished starting up. Your app should probably include some error handling for this condition and retry after a backoff period on the first few failures, along the lines of the sketch below.
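A minimal sketch of such a wait-and-retry in Scala (a hypothetical helper, not from the original post; it polls the registry's /subjects endpoint, which answers 200 once the server is listening):

import java.net.{HttpURLConnection, URL}
import scala.util.Try

def waitForSchemaRegistry(baseUrl: String, attempts: Int = 10, backoffMs: Long = 3000L): Boolean =
  (1 to attempts).exists { attempt =>
    val reachable = Try {
      val conn = new URL(baseUrl + "/subjects").openConnection().asInstanceOf[HttpURLConnection]
      conn.setConnectTimeout(2000)
      conn.setReadTimeout(2000)
      val ok = conn.getResponseCode == 200
      conn.disconnect()
      ok
    }.getOrElse(false)
    if (!reachable) Thread.sleep(backoffMs * attempt) // linear backoff between attempts
    reachable
  }

// Call before building the producer, e.g.:
// require(waitForSchemaRegistry(schemaRegistryUrl), "Schema Registry never became available")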
For anyone else going through this, I was getting a connection refused error with my schema registry for the longest time. I have my schema-registry running on a docker container with a separate container for a REST API that is trying to use the schema-registry container. What fixed it for me was changing my connection URL in the REST container from http://localhost:8081 -> http://schema-registry-server:8081.
Connecting to the Schema Registry in the REST container:
schema_registry = SchemaRegistryClient(
    url={
        "url": "http://schema-registry-server:8081"
    },
)
Here's the schema registry part of my docker compose file
# docker-compose.yml
schema-registry-server:
  image: confluentinc/cp-schema-registry
  hostname: schema-registry-server
  container_name: schema-registry-server
  depends_on:
    - kafka
    - zookeeper
  ports:
    - 8081:8081
  environment:
    - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
    - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:32181
    - SCHEMA_REGISTRY_HOST_NAME=schema-registry-server
    - SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
    - SCHEMA_REGISTRY_DEBUG=true

Kafka connect cannot find broker in docker

I have a docker-compose file that sets up a Kafka infrastructure. When I run docker-compose for the first time on a machine, all the containers run fine, but after I stop/remove the containers and re-run docker-compose, the container for the image cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0 sometimes exits. The problem is not constant and I cannot figure out the root cause.
Could you please help me with this container? Looking at the logs, I see an unknown host exception, but I don't know why it is trying to find the IP 172.18.0.4. It's not even my IP.
P.S. I am a novice at Kafka docker setup, please help!
Following are the error logs from the cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0 container.
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 5.4.0-ccs
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: f4201a82bea68cc7
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1585904457838
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.4:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.4:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.4:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Error connecting to node broker:29092 (id: -1 rack: null)
java.net.UnknownHostException: broker: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:289)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:969)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1184)
at java.lang.Thread.run(Thread.java:748)
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Error connecting to node broker:29092 (id: -1 rack: null)
java.net.UnknownHostException: broker
Following is the docker-compose.yml file, which I run with the command:
docker-compose up -d
version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:5.4.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:5.4.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.4.1
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
  kafka-topics-ui:
    image: landoop/kafka-topics-ui:0.9.4
    hostname: kafka-topics-ui
    ports:
      - "8000:8000"
    environment:
      KAFKA_REST_PROXY_URL: "http://rest-proxy:8082/"
      PROXY: "true"
    depends_on:
      - zookeeper
      - broker
      - schema-registry
      - rest-proxy
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.4.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
docker-compose ps
Name              Command                     State   Ports
------------------------------------------------------------------------------------------------
broker            /etc/confluent/docker/run   Up      0.0.0.0:9092->9092/tcp
connect           /etc/confluent/docker/run   Up      0.0.0.0:8083->8083/tcp, 9092/tcp
topics-ui         /run.sh                     Up      0.0.0.0:8000->8000/tcp
rest-proxy        /etc/confluent/docker/run   Up      0.0.0.0:8082->8082/tcp
schema-registry   /etc/confluent/docker/run   Up      0.0.0.0:8081->8081/tcp
zookeeper         /etc/confluent/docker/run   Up      0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
docker-compose ps output after sometime
Name              Command                     State      Ports
------------------------------------------------------------------------------------------------
broker            /etc/confluent/docker/run   Up         0.0.0.0:9092->9092/tcp
connect           /etc/confluent/docker/run   Exit 137
topics-ui         /run.sh                     Up         0.0.0.0:8000->8000/tcp
rest-proxy        /etc/confluent/docker/run   Up         0.0.0.0:8082->8082/tcp
schema-registry   /etc/confluent/docker/run   Up         0.0.0.0:8081->8081/tcp
zookeeper         /etc/confluent/docker/run   Up         0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
Notice that the 'connect' container is in the Exit 137 state.
I don't know why it is trying to find 172.18.0.4 IP. It's not even my IP.
That is the container's address on the internal Docker bridge network (172.18.0.0/16 is a default Docker subnet), not an IP of your own machine.
Overall, it seems like you are running into a memory problem, because you are connecting to the correct addresses (CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS could actually be localhost:29092), but the Docker network is becoming unstable. Exit code 137 means the container was killed with SIGKILL, which is typically the out-of-memory killer at work. Run docker-compose rm -sf and docker system prune to get to a clean(er) state.
Solutions
Increase your Docker memory in the settings
Don't run containers you don't need (REST Proxy or Topics UI), and consider capping memory per container, as in the sketch below
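A sketch of how that could look for the connect service (assumptions: mem_limit is supported by compose file format 2.x, and the Confluent images pass KAFKA_HEAP_OPTS through to the JVM):

connect:
  image: cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0
  mem_limit: 2g                           # hard cap for the container
  environment:
    KAFKA_HEAP_OPTS: "-Xms256M -Xmx1G"    # keep the JVM heap below the cap
    # ...rest of the environment as above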

Getting Secure Kafka and Schema registry to talk to each other, as well as spring boot configuration for it

Hey, I have been trying to solve this for about two weeks now.
Basically I want Kafka to be on SSL and Schema Registry on HTTPS. No Kerberos to be used.
I have two Spring services, one a producer and the other a consumer (Avro).
This is my current docker-compose. With it, when I send a request to my producer, no errors are thrown in the application and the request times out, but the Kafka logs show kafka_1 | [2019-12-03 09:53:27,454] INFO [SocketServer brokerId=1] Failed authentication with /172.18.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
When I uncomment the lines in the docker-compose I get PKIX path building failed, plus another error saying that Avro cannot be serialized, or something like that.
zookeeper:
  image: confluentinc/cp-zookeeper:5.3.0
  ports:
    - 2181:2181
  environment:
    ZOOKEEPER_CLIENT_PORT: "2181"
    ZOOKEEPER_TICK_TIME: "2000"
kafka:
  image: confluentinc/cp-kafka:5.3.0
  ports:
    - 29094:29094
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SSL:SSL
    KAFKA_SECURITY_PROTOCOL: SSL
    KAFKA_INTER_BROKER_PROTOCOL: SSL
    KAFKA_INTER_BROKER_LISTENER_NAME: SSL
    KAFKA_LISTENERS: SSL://kafka:29094,PLAINTEXT://kafka:9092
    KAFKA_ADVERTISED_LISTENERS: SSL://kafka:29094,PLAINTEXT://kafka:9092
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    # KAFKA_SSL_CLIENT_AUTH: required
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.server.keystore.jks
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.server.truststore.jks
    KAFKA_SSL_KEY_CREDENTIALS: key_credential
    KAFKA_SSL_KEYSTORE_CREDENTIALS: key_credential
    KAFKA_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.server.keystore.jks
    KAFKA_SSL_KEYSTORE_PASSWORD: PASSWORD
    KAFKA_SSL_KEY_PASSWORD: PASSWORD
    KAFKA_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.server.truststore.jks
    KAFKA_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: key_credential
    KAFKA_HEAP_OPTS: -Xmx456M
    KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    # KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
    KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
    KAFKA_SUPER_USERS: User:CN=Kafka-domain
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /home/xubu/Documents:/etc/kafka/secrets
schema-registry:
  image: confluentinc/cp-schema-registry:5.3.0
  depends_on:
    - zookeeper
    - kafka
  ports:
    - 8181:8181
    - 8085:8085
    - 8086:8086
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8085, https://schema-registry:8086
    SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
    SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: SSL
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: SSL://kafka:29094
    SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION: /etc/kafka/client/kafka.client.truststore.jks
    SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION: /etc/kafka/client/kafka.client.keystore.jks
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION: /etc/kafka/client/kafka.client.truststore.jks
    SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION: /etc/kafka/client/kafka.client.keystore.jks
    SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_KEY_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    SCHEMA_REGISTRY_SECURITY_PROTOCOL: SSL
    SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "https"
    #SCHEMA_REGISTRY_SSL_CLIENT_AUTH: 'true'
  volumes:
    - /home/xubu/Documents:/etc/kafka/client
    - /home/xubu/Documents:/etc/kafka/consumer
Below is part of my spring boot application.yaml
spring:
  kafka:
    bootstrap-servers: kafka:29094
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
    ssl:
      key-store-location: /home/xubu/Documents/kafka.client.keystore.jks
      key-password: PASSWORD
      key-store-password: PASSWORD
      trust-store-location: /home/xubu/Documents/kafka.client.truststore.jks
      trust-store-password: PASSWORD
      protocol: SSL
    properties:
      value:
        subject:
          name:
            strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      ssl.endpoint.identification.algorithm: https
      schema.registry.url: https://schema-registry:8086
ssl:
  trust-store-location: /home/xubu/Documents/kafka.client.truststore.jks
  trust-store-password: PASSWORD
  key-store-location: /home/xubu/Documents/kafka.client.keystore.jks
  key-store-password: PASSWORD
  key-password: PASSWORD
  protocol: SSL
  key-store-type: jks
  trust-store-type: jks
This has been my pain for the last two or so weeks, and it's an intro to trying out Access Control Lists on Schema Registry.

KafkaProducer cannot be created due to missing java.security.auth.login.config

I am attempting to create a KafkaProducer using the akka-stream-kafka library.
INFRASTRUCTURE
This uses docker-compose; only the kafka and zookeeper services are shown.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka:5.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
I can report that I have been able to connect to the cluster using a kafka-console-consumer, kafka-console-producer on the CLI and the REST API with no issues.
CONFIG
This is my Typesafe config; I attempt to use a plaintext connection to the broker. I am trying to connect to the Kafka broker without any authentication.
bootstrap.servers="localhost:29092"
acks = "all"
retries = 2
batch.size = 16384
linger.ms = 1
buffer.memory = 33554432
max.block.ms = 5000
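For reference, alpakka-kafka's ProducerSettings reads the client properties from a kafka-clients block nested under akka.kafka.producer, so a minimal application.conf layout would look like the sketch below (an assumption about where the flat properties above are meant to live, not the original file):

akka.kafka.producer {
  kafka-clients {
    bootstrap.servers = "localhost:29092"
    acks = "all"
    # PLAINTEXT is the default; stating it explicitly avoids any SASL/SSL channel builder
    security.protocol = PLAINTEXT
  }
}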
CODE
val config = ConfigFactory.load().getConfig("akka.kafka.producer")
val stringSerializer = new StringSerializer()
val producerSettings =
  ProducerSettings[String, String](config, stringSerializer, stringSerializer)
// some code has been omitted here.
Producer.plainSink(producerSettings)
EXCEPTION
This is the stacktrace that I receive, it tells me that there is no jaas config:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
How can I connect to the Kafka Cluster using no auth as required to run locally?
I have tried adding KAFKA_OPTS as an environment variable to the kafka service in docker-compose, as well as adding the following to application.conf:
sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username='confluent' password='confluent-secret';"
In the former case, some associated services such as the kafka-rest API failed. In the latter case, I get the following exception:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:125)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
Caused by: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:297)
at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:87)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:52)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:89)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:114)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)