Kafka SASL_PLAINTEXT with GSSAPI for Kerberos - apache-kafka

I am trying to run a single Kafka server using SASL/GSSAPI over plaintext, but I am getting the error below.
[2018-10-03 16:08:54,220] ERROR [Controller id=0, targetBrokerId=0]
Connection to node 0 failed authentication due to: An error:
(java.security.PrivilegedActionException:
javax.security.sasl.SaslException: GSS initiate failed [Caused by
GSSException: No valid credentials provided]) occurred when evaluating
SASL token received from the Kafka Broker. Kafka Client will go to
AUTHENTICATION_FAILED state. (org.apache.kafka.clients.NetworkClient)
The changes in server.properties are:
listeners=SASL_PLAINTEXT://kafka.example.com:9095
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
advertised.listeners=SASL_PLAINTEXT://kafka.example.com:9095
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=HTTP
Here is my jaas config:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
useTicketCache=true
storeKey=true
keyTab="/home/kafka/kafka_server.keytab"
principal="HTTP/kafka.example.com#UNIX.EXAMPLE.COM";
};
Any leads on how to resolve this?

First of all,
either use the keytab (useKeyTab=true) or the ticket cache (useTicketCache=true), not both at once; using both may lead to conflicts.
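For example, a keytab-only KafkaServer entry could look like the sketch below, reusing the keytab path from the question (the kafka/... principal is an assumption and must match what is in your keytab):
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/home/kafka/kafka_server.keytab"
    principal="kafka/kafka.example.com@UNIX.EXAMPLE.COM";
};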
If you run your own Kerberos KDC, create a principal for Kafka:
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
Then set the Kerberos service name to match the principal's primary:
sasl.kerberos.service.name="kafka"
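Putting it together, the broker side would then look roughly like this (a sketch of the question's server.properties with the service name and property names aligned):
listeners=SASL_PLAINTEXT://kafka.example.com:9095
advertised.listeners=SASL_PLAINTEXT://kafka.example.com:9095
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka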
Set JVM parameters
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf
-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"
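You can also sanity-check the keytab and principal outside Kafka before starting the broker, using the placeholders from the kadmin commands above:
kinit -kt /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}
klist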
Hope this helps.

Related

Authenticate Kafka CLI with Kafka running on Confluent

I have a Kafka cluster running on Confluent Cloud, but I'm not able to reset the commit offset from the UI. Hence, I'm trying to do it via Kafka's CLI as below:
kafka-consumer-groups --bootstrap-server=my_cluster.confluent.cloud:9092 --list
However, I'm running into the error below, and I think it has to do with how I authenticate.
Error: Executing consumer group command failed due to org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
java.util.concurrent.ExecutionException: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.listConsumerGroups(ConsumerGroupCommand.scala:203)
at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.listGroups(ConsumerGroupCommand.scala:198)
at kafka.admin.ConsumerGroupCommand$.run(ConsumerGroupCommand.scala:70)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:59)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
Caused by: org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups
at org.apache.kafka.clients.admin.KafkaAdminClient$24.handleFailure(KafkaAdminClient.java:3368)
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.handleTimeoutFailure(KafkaAdminClient.java:838)
at org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:804)
at org.apache.kafka.clients.admin.KafkaAdminClient$TimeoutProcessor.handleTimeouts(KafkaAdminClient.java:934)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.timeoutPendingCalls(KafkaAdminClient.java:1013)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1367)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1331)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: findAllBrokers
Here is an example of listing consumer groups:
kafka-consumer-groups --bootstrap-server <ccloud kafka>:9092 --command-config consumer.properties --list
consumer.properties
bootstrap.servers=<ccloud kafka>:9092
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<KEY>" password="<SECRET>";
You'll want to use the --command-config option to pass a properties file containing your Confluent Cloud credentials.
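Since the original goal was resetting the commit offset, the same --command-config can be reused there; a sketch (the group, topic, and reset strategy are placeholders, not values from the question):
kafka-consumer-groups --bootstrap-server <ccloud kafka>:9092 --command-config consumer.properties \
  --group <group id> --topic <topic> --reset-offsets --to-earliest --execute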

Connect to Cloudera Kafka cluster using Kafka Connect on a RedHat client machine

I have to connect to a Cloudera Kafka cluster using Kafka Connect.
Kafka cluster Apache version => 0.11
Cloudera version = CDK 3.1
The Cloudera cluster is secured with Kerberos.
I am able to connect when the Kafka Connect client is on CentOS 7.
I am not able to connect when the Kafka Connect client is on RedHat 8.
The error is below:
[2021-05-25 04:54:56,855] WARN [AdminClient clientId=adminclient-1] Connection to node -2 (xxxxxxxx:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient:769)
SSL is not enabled on the Cloudera cluster; it runs on port 9092.
Then I tried the console producer as well, which also gave the error below, again only on RedHat 8:
[2021-05-29 09:16:31,772] WARN [Producer clientId=console-producer] Connection to node -4 (xxxxk4.domain/xx.xxx.xx.47:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient)
[2021-05-29 09:16:31,772] WARN [Producer clientId=console-producer] Bootstrap broker xxxxk4.domain:9092 (id: -4 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-05-29 09:16:31,895] WARN [Producer clientId=console-producer] Connection to node -1 (xxxxk1.domain/xx.xxx.xx.44:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient)
The JAAS file for the console producer is below:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/home/test/keytabs/username.keytab"
principal="user1#DOMAIN.DOMAINaABC.COM";
};
The client.properties file is below:
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
The Kafka Connect properties file is below:
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useKeyTab=true \
storeKey=true \
keyTab="/home/test/keytabs/user.keytab" \
principal="username#DOMAIN.XXXXXABC.COM";
How we ran it:
bin/connect-standalone.sh config/connect-standalone.properties default-connectortest.properties
The console command is below:
sh kafka-console-producer.sh --broker-list namek1.domain.lk:9092,xxxxk2.xx.company.lk:9092, --producer.config /opt/analytics/xxxx/client.properties --topic sample --property print.key=true --property key.separator=" -> "
cat /etc/krb5.conf
# Other applications require this directory to perform krb5 configuration.
includedir /etc/krb5.conf.d/
[libdefaults]
default_realm = DOMAIN.NAME1ABC.COM
dns_lookup_kdc = true
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts
default_tkt_enctypes = aes256-cts
permitted_enctypes = aes256-cts
udp_preference_limit = 1
kdc_timeout = 9000
ignore_acceptor_hostname = true
[realms]
NAME1.NAME1ABC.COM = {
kdc = name.DOMAIN.NAME1ABC.COM
admin_server = name2.name1.nameabc.com
}
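(For reference, given that the enctypes above are restricted to aes256-cts, one way to rule out the keytab itself on the RedHat 8 host is to test it outside Kafka; a sketch using the keytab path from this question, where klist -e prints the encryption types in use:)
kinit -kt /home/test/keytabs/username.keytab user1@DOMAIN.DOMAINaABC.COM
klist -e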

KeeperErrorCode = InvalidACL when using kafka-configs

I'm trying to set up my Kafka cluster to accept SASL_SSL / SCRAM authentication.
First of all, everything actually works and clients connect using SASL_SSL / GSSAPI. My ZooKeeper servers are also configured for SASL authentication and TLS.
I'm using the Confluent Docker images for Kafka and ZooKeeper:
confluentinc/cp-kafka:6.0.1
confluentinc/cp-zookeeper:5.5.3-3
So I just modified my setup to allow SCRAM-SHA-512 in Kafka: KAFKA_SASL_ENABLED_MECHANISMS=GSSAPI,SCRAM-SHA-512
Following these instructions, I now want to add the users in ZooKeeper, and this is where the problems start (run from the Kafka node):
[root@kafka1 [RCI] ~]# /usr/bin/podman exec kafka kafka-configs --zk-tls-config-file /etc/kafka/secrets/zk-ssl.properties --zookeeper Zk:3181 --alter --entity-type topics --entity-name test_jerome --add-config 'retention.ms=1'
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
[2021-01-21 17:44:14,807] WARN zookeeper.ssl.keyStore.location not specified (org.apache.zookeeper.common.X509Util)
Error while executing config command with args '--zk-tls-config-file /etc/kafka/secrets/zk-ssl.properties --zookeeper Zk:3181 --alter --entity-type topics --entity-name test_jerome --add-config retention.ms=1'
org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /config/changes
at org.apache.zookeeper.KeeperException.create(KeeperException.java:128)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:564)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1646)
at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1568)
at kafka.zk.KafkaZkClient.createConfigChangeNotification(KafkaZkClient.scala:395)
at kafka.zk.AdminZkClient.changeEntityConfig(AdminZkClient.scala:385)
at kafka.zk.AdminZkClient.changeTopicConfig(AdminZkClient.scala:342)
at kafka.zk.AdminZkClient.changeConfigs(AdminZkClient.scala:278)
at kafka.admin.ConfigCommand$.alterConfigWithZk(ConfigCommand.scala:167)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:118)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:92)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Error: non zero exit code: 1: OCI runtime error
ZK logs are not really helpful:
[2021-01-21 17:58:08,333] INFO Successfully authenticated client: authenticationID=admin; authorizationID=admin. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2021-01-21 17:58:08,333] INFO Successfully authenticated client: authenticationID=admin; authorizationID=admin. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2021-01-21 17:58:08,333] INFO Setting authorizedID: admin (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2021-01-21 17:58:08,333] INFO Setting authorizedID: admin (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2021-01-21 17:58:08,334] INFO adding SASL authorization for authorizationID: admin (org.apache.zookeeper.server.ZooKeeperServer)
[2021-01-21 17:58:08,334] INFO adding SASL authorization for authorizationID: admin (org.apache.zookeeper.server.ZooKeeperServer)
As you can see, SASL authentication works and the admin user is correctly identified.
If I log into the ZooKeeper shell (from the ZooKeeper node), you can see that the ACLs are fully open:
getAcl /config/users
'world,'anyone
: cdrwa
getAcl /config
'world,'anyone
: cdrwa
getAcl /
'world,'anyone
: cdrwa
If I create the znode inside the ZooKeeper shell (from the ZooKeeper node), it works:
create /config/users/topicctl
Created /config/users/topicctl
There are no logs on the ZooKeeper server when I do this, since I do not authenticate.
I have now spent the afternoon on this problem without any progress.
What could be the problem?
I finally found my issue thanks to this post: Kafka not starting up if zookeeper.set.acl is set to true
I just added this info to the KAFKA_OPTS env variable: "-Dzookeeper.kerberos.removeHostFromPrincipal=true -Dzookeeper.kerberos.removeRealmFromPrincipal=true -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Dzookeeper.requireClientAuthScheme=sasl"
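With the Confluent images, one way to pass these flags is via the container's environment (a sketch; the container name and any other required flags are assumptions):
podman run -d --name kafka1 \
  -e KAFKA_OPTS="-Dzookeeper.kerberos.removeHostFromPrincipal=true -Dzookeeper.kerberos.removeRealmFromPrincipal=true -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Dzookeeper.requireClientAuthScheme=sasl" \
  confluentinc/cp-kafka:6.0.1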
My main question now is: why was SASL authentication working before this change?

Invalid credentials with SASL_PLAINTEXT mechanism SCRAM-SHA-256 - InvalidACL for /config/users/admin

I am using kafka_2.3.0, Ubuntu 16.04
Below are the configurations for the Kafka brokers and the ZooKeeper nodes. Currently I am testing this on a single machine, so the IP stays the same throughout and only the ports differ.
kafka-broker 1 configuration.
broker.id=1
listeners=SASL_PLAINTEXT://192.168.1.172:9092
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9092
log.dirs=/home/emgda/data/kafka/1/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 2 configuration.
broker.id=2
listeners=SASL_PLAINTEXT://192.168.1.172:9093
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9093
log.dirs=/home/emgda/data/kafka/2/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 3 configuration.
broker.id=3
listeners=SASL_PLAINTEXT://192.168.1.172:9094
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9094
log.dirs=/home/emgda/data/kafka/3/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka_jaas.config
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="adminsecret";
};
zookeeper_jaas.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret";
};
Zookeeper-node 1 configuration
dataDir=/home/emgda/data/zookeeper/1/
clientPort=2181
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 2 configuration
dataDir=/home/emgda/data/zookeeper/2/
clientPort=2182
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 3 configuration
dataDir=/home/emgda/data/zookeeper/3/
clientPort=2183
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
The ZooKeeper nodes in the cluster start properly, and Kafka is also able to authenticate to ZooKeeper. The ZooKeeper logs below show what happens when the first Kafka broker comes up:
[2019-12-30 13:35:29,465] INFO Accepted socket connection from /192.168.1.172:42362 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2019-12-30 13:35:29,480] INFO Client attempting to establish new session at /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,487] INFO Established session 0x10000d285210003 with negotiated timeout 6000 for client /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,529] INFO Successfully authenticated client: authenticationID=super; authorizationID=super. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,529] INFO Setting authorizedID: super (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,530] INFO adding SASL authorization for authorizationID: super (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:36:54,999] INFO Closed socket connection for client /192.168.1.172:42362 which had sessionid 0x10000d285210003 (org.apache.zookeeper.server.NIOServerCnxn)
The error while starting the first Kafka broker is below:
[2019-12-30 13:35:58,417] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 (192.168.1.172/192.168.1.172:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2019-12-30 13:35:58,421] INFO [SocketServer brokerId=1] Failed authentication with /192.168.1.172 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)
When I try to create a Kafka broker user using the command below, I get this error:
emgda@ubuntu:~/softwares/kafka_2.12-2.3.0$ ./bin/kafka-configs.sh --zookeeper 192.168.1.172:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
Error while executing config command with args '--zookeeper 192.168.1.172:2181 --alter --add-config SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret] --entity-type users --entity-name admin'
org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /config/users/admin
at org.apache.zookeeper.KeeperException.create(KeeperException.java:124)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:560)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1610)
at kafka.zk.KafkaZkClient.createOrSet$1(KafkaZkClient.scala:357)
at kafka.zk.KafkaZkClient.setOrCreateEntityConfigs(KafkaZkClient.scala:367)
at kafka.zk.AdminZkClient.changeEntityConfig(AdminZkClient.scala:378)
at kafka.zk.AdminZkClient.changeUserOrUserClientIdConfig(AdminZkClient.scala:312)
at kafka.zk.AdminZkClient.changeConfigs(AdminZkClient.scala:276)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:153)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:104)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:80)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Question:
What am I missing so that the Kafka broker authenticates properly?
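(One thing worth checking, offered as a sketch rather than a confirmed fix: kafka-configs.sh only authenticates to ZooKeeper when it is handed the Client JAAS section via KAFKA_OPTS; the path to kafka_jaas.config below is an assumption:)
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_jaas.config"
./bin/kafka-configs.sh --zookeeper 192.168.1.172:2181 --alter \
  --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' \
  --entity-type users --entity-name admin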

Kafka 2.2.0 with SASL-SCRAM - SSL peer is not authenticated, returning ANONYMOUS instead

I get the Kafka error "SSL peer is not authenticated, returning ANONYMOUS instead" when a client connects to the broker's SASL port; connections on the PLAINTEXT and SSL ports work.
I have Kafka 2.2.0 on Windows systems with SSL enabled, where the broker's PLAINTEXT listener runs on 9092 and SSL on 9093. On top of that, I configured SASL with the SCRAM mechanism on listener port 9094, and I end up with the error mentioned in the problem summary when running the producer as kafka-console-producer.bat --broker-list localhost:9094 --topic xxx
Here are the SASL configurations; other configuration, such as the basic and SSL settings, is not included.
zookeeper.properties
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
server.properties
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,SASL_SSL://0.0.0.0:9094
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093,SASL_SSL://localhost:9094
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
zookeeper_server_jaas.conf
Server {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd"
user_admin="admin-pwd"
user_other1="other1-pwd"
user_other2="other2-pwd";
};
producer.properties
security.protocol=SSL
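(For comparison, a client connecting to the SASL_SSL listener on 9094 would normally need SASL settings as well; a sketch of a SCRAM client config, where the truststore entries are assumptions:)
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
ssl.truststore.location=<path-to-truststore.jks>
ssl.truststore.password=<truststore-password>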
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-pwd";
};
kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd";
};
Start ZooKeeper as
SET ZOO_LOG_DIR=C:/Work/kafka_2.11-2.2.0-for-ssl/zookeeper-data
SET KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/zookeeper_server_jaas.conf
zookeeper-server-start.bat %KAFKA_HOME%/config/zookeeper.properties
Start Kafka as
set KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/kafka_server_jaas.conf
kafka-server-start.bat %KAFKA_HOME%/config/server.properties
Start the Producer as
SET KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/kafka_client_jaas.conf
kafka-console-producer.bat --broker-list localhost:9094 --topic xxx
The producer only works if I use broker port 9092. Did I miss something or end up with a misconfiguration? Any inputs?
Update:
Here is the error when the producer/consumer connects:
[2019-10-14 15:39:42,108] DEBUG [SslTransportLayer channelId=127.0.0.1:9094-127.0.0.1:63848-0 key=sun.nio.ch.SelectionKeyImpl@222a223c] SSL peer is not authenticated, returning ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
[2019-10-14 15:39:42,108] DEBUG [SslTransportLayer channelId=127.0.0.1:9094-127.0.0.1:63848-0 key=sun.nio.ch.SelectionKeyImpl@222a223c] SSL handshake completed successfully with peerHost '127.0.0.1' peerPort 63848 peerPrincipal 'User:ANONYMOUS' cipherSuite 'TLS_DHE_DSS_WITH_AES_256_CBC_SHA256' (org.apache.kafka.common.network.SslTransportLayer)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Handling Kafka request API_VERSIONS during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to HANDSHAKE_REQUEST during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to FAILED during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] INFO [SocketServer brokerId=0] Failed authentication with 127.0.0.1/127.0.0.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
I had the same problem: authentication with SASL/SCRAM wasn't working on Kafka 2.2.x and 2.3.x; on 2.1 it was OK.
In the end I resolved the issue by providing the ZooKeeper chroot path (/kafkaTest) when creating the credentials:
./kafka-configs --zookeeper zookeeper-01:2181/kafkaTest --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
It seems that when the credentials are created under the ZooKeeper root path, Kafka can't find them to validate against.
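To confirm the credentials actually landed under the chroot, describing the same path should list them (a sketch against the same host):
./kafka-configs --zookeeper zookeeper-01:2181/kafkaTest --describe --entity-type users --entity-name admin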
I hope it will solve your issue as well!