How to fix Kafka SCRAM authentication failure - apache-kafka

Confluent Platform version: 5.4.1
I followed the documentation and a previous question to set up SCRAM authentication:
https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_scram.html#
kafka SASL/SCRAM Failed authentication
After modifying my configuration, SASL authentication succeeds on the ZooKeeper server, but it still fails on the Kafka server. The log messages and my related configuration are below; please advise.
ZooKeeper server output:
[2020-07-18 23:53:42,917] INFO Successfully authenticated client: authenticationID=adminuser; authorizationID=adminuser. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2020-07-18 23:53:43,143] INFO Setting authorizedID: adminuser (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2020-07-18 23:53:43,143] INFO adding SASL authorization for authorizationID: adminuser (org.apache.zookeeper.server.ZooKeeperServer)
[2020-07-18 23:53:51,162] INFO Successfully authenticated client: authenticationID=adminuser; authorizationID=adminuser. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2020-07-18 23:53:51,162] INFO Setting authorizedID: adminuser (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2020-07-18 23:53:51,162] INFO adding SASL authorization for authorizationID: adminuser (org.apache.zookeeper.server.ZooKeeperServer)
Kafka server error message:
org.apache.kafka.common.errors.DisconnectException: Cancelled fetchMetadata request with correlation id 11 due to node -1 being disconnected
[2020-07-19 00:23:59,921] INFO [SocketServer brokerId=0] Failed authentication with /192.168.20.10 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2020-07-19 00:24:00,095] WARN [Producer clientId=confluent-metrics-reporter] Bootstrap broker 192.168.20.10:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-07-19 00:24:00,403] INFO [SocketServer brokerId=0] Failed authentication with /192.168.20.10 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2020-07-19 00:24:00,597] INFO [SocketServer brokerId=0] Failed authentication with /192.168.20.10 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2020-07-19 00:24:00,805] INFO [SocketServer brokerId=0] Failed authentication with /192.168.20.10 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
zookeeper_server_jaas.conf:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_adminuser="adminuserpwd";
};
zookeeper.properties:
server.001=192.168.20.10:2888:3888
authProvider.001=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
zookeeper-server-start:
...
export ZK_AUTH_ARGS=$base_dir/../data/zookeeper_server_jaas.conf
exec $base_dir/kafka-run-class $EXTRA_ARGS -Djava.security.auth.login.config=$ZK_AUTH_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@"
Added user:
bin/kafka-configs --zookeeper 192.168.20.10:2181 --alter --add-config 'SCRAM-SHA-256=[password=adminuserpwd],SCRAM-SHA-512=[password=adminuserpwd]' --entity-type users --entity-name adminuser
bin/kafka-configs --zookeeper 192.168.20.10:2181 --describe --entity-type users --entity-name adminuser
Configs for user-principal 'adminuser' are SCRAM-SHA-512=salt=MTdxamZocWJlY2F2dDFhZGc0dmluZm5hcmo=,stored_key=o21ptVzTVZoR/hafmOgTSYmr2F1TORPo6xDaZGAph+6OncE1pw/AyLRwduCx0Qx97bKoPWmlYShfXtbug6u8kg==,server_key=1B/1/CzPTpMBO9MpfKZb504JFLZUia0D6LatAllSYkrTa8XWbaISDGQ29Yf4UU+jQmo+iQgK0jX+KaV+fUV6XA==,iterations=4096,SCRAM-SHA-256=salt=MWlrZGs5dHd4dDhiZmdqZGxnN2cwOGpuaGs=,stored_key=vSJ83eDvilj4JyQyehPaGmG3EZISRRfo3j8iY8uiWLU=,server_key=Bu/KfHnv6bSay/n4dO/h55O9WLLaAjiLtJQzfpr4cs0=,iterations=4096
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="adminuser"
password="adminuserpwd";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="adminuser"
password="adminuserpwd";
};
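As an aside, the same broker credential can be supplied inline in server.properties instead of a static JAAS file; a minimal sketch, assuming the SASL_PLAINTEXT listener and SCRAM-SHA-256 mechanism shown below (the listener.name prefix must be the listener name in lower case):
listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="adminuser" \
password="adminuserpwd";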
kafka server.properties:
...
listeners=SASL_PLAINTEXT://192.168.20.10:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256
advertised.listeners=SASL_PLAINTEXT://192.168.20.10:9092
zookeeper.connect=192.168.20.10:2181
authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
super.users=User:adminuser
allow.everyone.if.no.acl.found=false
...
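One detail worth checking against the failure loop in the log above: the failing client is clientId=confluent-metrics-reporter, and "Unexpected Kafka request of type METADATA during SASL handshake" is the signature of a client connecting to a SASL listener without SASL configured. The Confluent metrics reporter takes its own client settings from server.properties under the confluent.metrics.reporter. prefix; a hedged sketch (exact keys assumed from the standard producer-override convention):
confluent.metrics.reporter.bootstrap.servers=192.168.20.10:9092
confluent.metrics.reporter.security.protocol=SASL_PLAINTEXT
confluent.metrics.reporter.sasl.mechanism=SCRAM-SHA-256
confluent.metrics.reporter.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="adminuser" \
password="adminuserpwd";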
kafka-server-start:
...
KAFKA_AUTH_ARGS=$base_dir/../data/kafka_server_jaas.conf
exec $base_dir/kafka-run-class $EXTRA_ARGS -Djava.security.auth.login.config=$KAFKA_AUTH_ARGS io.confluent.support.metrics.SupportedKafka "$@"
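To test broker-side SCRAM independently of the reporter, a console client needs matching SASL settings; a minimal sketch, where client.properties and the topic name are hypothetical:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="adminuser" \
password="adminuserpwd";
Then run it against the broker with:
bin/kafka-console-producer --broker-list 192.168.20.10:9092 --topic test --producer.config client.properties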

Related

Kafka set up 2 authentications SASL_PLAINTEXT and SASL_SSL

I intend to set up two authentication modes: SASL_PLAINTEXT and SASL_SSL. SASL_PLAINTEXT will be used between the brokers and ZooKeeper, and SASL_SSL will be used with external producers and consumers.
I can get either one working on its own, but not both at the same time.
Right now the broker can authenticate with ZooKeeper, but I can't get a producer to authenticate to the broker via SASL_SSL on port 9093.
Server.properties
listeners=SASL_PLAINTEXT://172.22.10.21:9092,SASL_SSL://172.22.10.21:9093
advertised.listeners=SASL_PLAINTEXT://172.22.10.21:9092,SASL_SSL://172.22.10.21:9093
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
ssl.truststore.location=/home/aaapi/ssl/kafka.server.truststore.jks
ssl.truststore.password=serversecret
ssl.keystore.location=/home/aaapi/ssl/kafka.server.keystore.jks
ssl.keystore.password=serversecret
ssl.key.password=serversecret
ssl.enabled.protocols=TLSv1.2
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=SCRAM-SHA-256,SCRAM-SHA-512,PLAIN
sasl.mechanism=SCRAM-SHA-512
server_jaas.conf
sasl_ssl.KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="adminssl"
password="adminssl-secret";
};
sasl_plaintext.KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_kafkabroker1="kafkabroker1-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
zookeeper_jaas.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="admin-secret";
};
client_ssl.properties
security.protocol=SASL_SSL
#bootstrap.servers=172.22.10.21:9093
sasl.mechanism=SCRAM-SHA-512
ssl.enabled.protocols=TLSv1.2
ssl.endpoint.identification.algorithm=
ssl.truststore.location=/home/aaapi/ssl/kafka.client.truststore.jks
ssl.truststore.password=clientsecret
ssl.keystore.location=/home/aaapi/ssl/kafka.server.keystore.jks
ssl.keystore.password=serversecret
ssl.key.password=serversecret
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="adminssl" \
password="adminssl-secret";
Error
/opt/kafka/bin/kafka-console-producer.sh --broker-list 172.22.10.21:9093 --topic test1 --producer.config /home/aaapi/client_config/consumer/client_ssl.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/kafka-3.2.1-src/tools/build/dependant-libs-2.13.6/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/kafka-3.2.1-src/trogdor/build/dependant-libs-2.13.6/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/kafka-3.2.1-src/connect/runtime/build/dependant-libs/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/kafka-3.2.1-src/connect/mirror/build/dependant-libs/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
>[2022-11-11 19:45:06,787] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:06,788] WARN [Producer clientId=console-producer] Bootstrap broker 172.22.10.21:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:07,388] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:07,388] WARN [Producer clientId=console-producer] Bootstrap broker 172.22.10.21:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:08,323] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:08,323] WARN [Producer clientId=console-producer] Bootstrap broker 172.22.10.21:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:09,724] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:09,724] WARN [Producer clientId=console-producer] Bootstrap broker 172.22.10.21:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:11,149] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
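For what it's worth, "invalid credentials with SASL mechanism SCRAM-SHA-512" from a SCRAM listener usually means ZooKeeper holds no SCRAM credential for that user, since SCRAM users must be registered before they can authenticate. A sketch of registering adminssl over the already-working SASL_PLAINTEXT listener (the --command-config file, which would carry the admin SASL client settings, is an assumption):
/opt/kafka/bin/kafka-configs.sh --bootstrap-server 172.22.10.21:9092 --command-config /home/aaapi/client_config/admin.properties --alter --add-config 'SCRAM-SHA-512=[password=adminssl-secret]' --entity-type users --entity-name adminssl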

Invalid credentials with SASL_PLAINTEXT mechanism SCRAM-SHA-256 - InvalidACL for /config/users/admin

I am using kafka_2.3.0 on Ubuntu 16.04.
Below are the configurations for the Kafka brokers and the ZooKeeper nodes. I am currently testing on a single machine, so the IP address is the same throughout and only the ports differ.
kafka-broker 1 configuration.
broker.id=1
listeners=SASL_PLAINTEXT://192.168.1.172:9092
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9092
log.dirs=/home/emgda/data/kafka/1/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 2 configuration.
broker.id=2
listeners=SASL_PLAINTEXT://192.168.1.172:9093
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9093
log.dirs=/home/emgda/data/kafka/2/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 3 configuration.
broker.id=3
listeners=SASL_PLAINTEXT://192.168.1.172:9094
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9094
log.dirs=/home/emgda/data/kafka/3/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka_jass.config
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="adminsecret";
};
zookeeper_jass.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret";
};
Zookeeper-node 1 configuration
dataDir=/home/emgda/data/zookeeper/1/
clientPort=2181
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 2 configuration
dataDir=/home/emgda/data/zookeeper/2/
clientPort=2182
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 3 configuration
dataDir=/home/emgda/data/zookeeper/3/
clientPort=2183
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
The ZooKeeper nodes in the cluster start properly, and Kafka is able to authenticate to ZooKeeper. The ZooKeeper logs below show what happens when the first Kafka broker comes up:
[2019-12-30 13:35:29,465] INFO Accepted socket connection from /192.168.1.172:42362 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2019-12-30 13:35:29,480] INFO Client attempting to establish new session at /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,487] INFO Established session 0x10000d285210003 with negotiated timeout 6000 for client /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,529] INFO Successfully authenticated client: authenticationID=super; authorizationID=super. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,529] INFO Setting authorizedID: super (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,530] INFO adding SASL authorization for authorizationID: super (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:36:54,999] INFO Closed socket connection for client /192.168.1.172:42362 which had sessionid 0x10000d285210003 (org.apache.zookeeper.server.NIOServerCnxn)
The following error appears while starting the first Kafka broker:
[2019-12-30 13:35:58,417] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 (192.168.1.172/192.168.1.172:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2019-12-30 13:35:58,421] INFO [SocketServer brokerId=1] Failed authentication with /192.168.1.172 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)
When I try to create the Kafka broker user with the command below, I get this error:
emgda@ubuntu:~/softwares/kafka_2.12-2.3.0$ ./bin/kafka-configs.sh --zookeeper 192.168.1.172:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
Error while executing config command with args '--zookeeper 192.168.1.172:2181 --alter --add-config SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret] --entity-type users --entity-name admin'
org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /config/users/admin
at org.apache.zookeeper.KeeperException.create(KeeperException.java:124)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:560)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1610)
at kafka.zk.KafkaZkClient.createOrSet$1(KafkaZkClient.scala:357)
at kafka.zk.KafkaZkClient.setOrCreateEntityConfigs(KafkaZkClient.scala:367)
at kafka.zk.AdminZkClient.changeEntityConfig(AdminZkClient.scala:378)
at kafka.zk.AdminZkClient.changeUserOrUserClientIdConfig(AdminZkClient.scala:312)
at kafka.zk.AdminZkClient.changeConfigs(AdminZkClient.scala:276)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:153)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:104)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:80)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Question:
What am I missing so that the Kafka broker authenticates properly?
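For reference: InvalidACL on /config/users/admin typically means kafka-configs.sh connected to ZooKeeper unauthenticated while ZooKeeper expects SASL, so it cannot create the znode with the intended ACL. A sketch of the usual fix, reusing the Client section from kafka_jass.config (the path is an assumption):
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/emgda/kafka_jass.config"
./bin/kafka-configs.sh --zookeeper 192.168.1.172:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
Until the admin SCRAM user is created this way, inter-broker SCRAM authentication will keep failing with the invalid-credentials error shown above. (Separately, the listener.name.sasl_ssl prefix in the broker files does not match the sasl_plaintext listener, so those inline JAAS lines are ignored and the static KafkaServer section is used instead.)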

Kafka 2.2.0 with SASL-SCRAM - SSL peer is not authenticated, returning ANONYMOUS instead

I get the error "SSL peer is not authenticated, returning ANONYMOUS instead" when a client connects to the broker's SASL port; connections on the PLAINTEXT and SSL ports work.
I have Kafka 2.2.0 on Windows with SSL enabled: the broker's PLAINTEXT listener runs on 9092 and SSL on 9093. On top of that, I configured SASL with the SCRAM mechanism on listener port 9094, and I end up with the error from the summary when running the producer as kafka-console-producer.bat --broker-list localhost:9094 --topic xxx
Here are the SASL configurations; I have not included the other settings, such as the basic and SSL configuration.
zookeeper.properties
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
server.properties
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,SASL_SSL://0.0.0.0:9094
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093,SASL_SSL://localhost:9094
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
zookeeper_server_jaas.conf
Server {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd"
user_admin="admin-pwd"
user_other1="other1-pwd"
user_other2="other2-pwd";
};
producer.properties
security.protocol=SSL
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-pwd";
};
kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd";
};
Start ZooKeeper as:
SET ZOO_LOG_DIR=C:/Work/kafka_2.11-2.2.0-for-ssl/zookeeper-data
SET KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/zookeeper_server_jaas.conf
zookeeper-server-start.bat %KAFKA_HOME%/config/zookeeper.properties
Start Kafka as:
set KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/kafka_server_jaas.conf
kafka-server-start.bat %KAFKA_HOME%/config/server.properties
Start the producer as:
SET KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/kafka_client_jaas.conf
kafka-console-producer.bat --broker-list localhost:9094 --topic xxx
The producer only works if I use broker port 9092. Did I miss something and end up with a misconfiguration? Any inputs?
Update:
Here is the error when connecting the producer/consumer:
[2019-10-14 15:39:42,108] DEBUG [SslTransportLayer channelId=127.0.0.1:9094-127.0.0.1:63848-0 key=sun.nio.ch.SelectionKeyImpl@222a223c] SSL peer is not authenticated, returning ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
[2019-10-14 15:39:42,108] DEBUG [SslTransportLayer channelId=127.0.0.1:9094-127.0.0.1:63848-0 key=sun.nio.ch.SelectionKeyImpl@222a223c] SSL handshake completed successfully with peerHost '127.0.0.1' peerPort 63848 peerPrincipal 'User:ANONYMOUS' cipherSuite 'TLS_DHE_DSS_WITH_AES_256_CBC_SHA256' (org.apache.kafka.common.network.SslTransportLayer)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Handling Kafka request API_VERSIONS during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to HANDSHAKE_REQUEST during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to FAILED during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] INFO [SocketServer brokerId=0] Failed authentication with 127.0.0.1/127.0.0.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
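The DEBUG trace shows the TLS handshake completing and then a METADATA request arriving during the SASL handshake, which is the signature of a client using security.protocol=SSL (as in producer.properties above) rather than SASL_SSL. A sketch of the client properties the 9094 listener expects (truststore entries omitted; they stay as in the SSL setup):
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
Pass the file to the console producer via --producer.config. The SCRAM credential itself must also exist in ZooKeeper, which is what the answer below addresses.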
I had the same problem: authentication with SASL/SCRAM wasn't working on Kafka 2.2.x and 2.3.x, while on 2.1 it was OK.
In the end I resolved the issue by providing the ZooKeeper chroot path (/kafkaTest) when creating the principals:
./kafka-configs --zookeeper zookeeper-01:2181/kafkaTest --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
It seems that when the credentials are created under the ZooKeeper root path, Kafka can't find them to validate against.
I hope it will solve your issue as well!
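One follow-up on the chroot approach: the brokers must point at the same chroot in server.properties, or they still won't see the credentials; sketch:
zookeeper.connect=zookeeper-01:2181/kafkaTest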

Kafka user authentication using SASL_PLAINTEXT

I'm trying to implement security in Kafka to authenticate clients using a username and password. The JAAS config file is configured properly. I start ZooKeeper first and then just one Kafka node. However, Kafka fails to start with the error below:
[2017-08-07 13:07:08,029] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(localhost,9092,ListenerName(SASL_PLAINTEXT),SASL_PLAINTEXT) (kafka.utils.ZkUtils)
[2017-08-07 13:07:08,035] INFO Kafka version : 0.11.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2017-08-07 13:07:08,036] INFO Kafka commitId : cb8625948210849f (org.apache.kafka.common.utils.AppInfoParser)
[2017-08-07 13:07:08,037] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2017-08-07 13:07:08,447] WARN Connection to node 0 terminated during authentication. This may indicate that authentication failed due to invalid credentials. (org.apache.kafka.clients.NetworkClient)
[2017-08-07 13:07:08,554] WARN Connection to node 0 terminated during authentication. This may indicate that authentication failed due to invalid credentials. (org.apache.kafka.clients.NetworkClient)
[2017-08-07 13:07:08,662] WARN Connection to node 0 terminated during authentication. This may indicate that authentication failed due to invalid credentials. (org.apache.kafka.clients.NetworkClient)
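A note for anyone hitting this loop: with SASL/PLAIN, the broker validates incoming connections (including its own inter-broker client) against the user_<name> entries of its KafkaServer JAAS section, so the broker's own username must also appear as a user_ entry. A minimal self-consistent sketch (credentials are placeholders, not taken from the question):
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};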

Kafka TOPIC_AUTHORIZATION_FAILED

I'm working on setting up simple Kafka authentication using SASL plaintext and adding ACL authorization, but I have an issue when I try to consume data.
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.10.0.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : b8642491e78c5a13
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 1 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 2 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 3 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 4 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 5 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 6 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 7 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 8 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 9 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 10 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
Next, you can see my configuration files.
server.properties
listeners=SASL_PLAINTEXT://localhost:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
bootstrap.servers=localhost:9092
compression.type=none
consumer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="alice"
password="alice-secret";
};
Environment variable:
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/kafka_2.10-0.10.0.1/kafka_server_jaas.conf"
Commands
Set ACL:
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation All --group test-consumer-group --topic test-topic
Start Kafka server:
./bin/kafka-server-start.sh config/server.properties
Start Producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties
Start Consumer:
bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties --bootstrap-server=localhost:9092
When I try to start the consumer, I hit the issue described above. Also, in the Kafka logs, I see this:
[2016-10-22 20:17:14,091] ERROR [KafkaApi-0] Error when handling request {group_id=test-consumer-group} (kafka.server.KafkaApis)
kafka.admin.AdminOperationException: replication factor: 3 larger than available brokers: 1
at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:117)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:403)
at kafka.server.KafkaApis.kafka$server$KafkaApis$$createTopic(KafkaApis.scala:629)
at kafka.server.KafkaApis.kafka$server$KafkaApis$$createGroupMetadataTopic(KafkaApis.scala:651)
at kafka.server.KafkaApis$$anonfun$getOrCreateGroupMetadataTopic$1.apply(KafkaApis.scala:657)
at kafka.server.KafkaApis$$anonfun$getOrCreateGroupMetadataTopic$1.apply(KafkaApis.scala:657)
at scala.Option.getOrElse(Option.scala:121)
at kafka.server.KafkaApis.getOrCreateGroupMetadataTopic(KafkaApis.scala:657)
at kafka.server.KafkaApis.handleGroupCoordinatorRequest(KafkaApis.scala:818)
at kafka.server.KafkaApis.handle(KafkaApis.scala:86)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
How can I fix this?
I fixed the issue by separating the client JAAS and server JAAS configurations into two files.
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="alice"
password="alice-secret";
};
In one terminal, export the server JAAS conf file and start the Kafka broker:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/kafka_2.10-0.10.0.1/kafka_server_jaas.conf"
$ ./bin/kafka-server-start.sh config/server.properties
In a client terminal, export the client JAAS conf file and start the consumer:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/kafka_2.10-0.10.0.1/kafka_client_jaas.conf"
$ ./bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties --bootstrap-server=localhost:9092
If you also want to produce, do this in another terminal window:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/kafka_2.10-0.10.0.1/kafka_client_jaas.conf"
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties
I have faced a similar issue using ACLs in Kafka v0.10. I found this discussion helpful, especially enabling the authorizer log to check which username comes in with the request versus what is specified in your ACLs.
First, check that the server principal admin has been granted all the authorization it needs: the server principal must be allowed to perform every type of operation on all topics and groups as well as on the cluster. It's better to declare admin under super.users in the server.properties file. If this doesn't resolve the issue, you can enable the authorization log to find out which principal is being denied which operation.
The authorization log can be enabled by modifying log4j.properties in the config folder: change WARN to DEBUG on the following line and restart the Kafka servers.
log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
This helped me sort out my issue. Hope that helps.
PS: The authorization logs are very lengthy and consume a lot of space, so remember to turn this off when you are done debugging.
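As a concrete form of the super-user suggestion above, using the same syntax that appears in the first question's server.properties:
super.users=User:admin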
It seems you have created a topic with a replication factor of 3 while only 1 broker is running. Try creating the topic with "--replication-factor 1". You might also want to change the default replication factor to 1 (default.replication.factor in config/server.properties) if you are creating topics automatically.
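Note that the stack trace above comes from auto-creation of the group metadata topic (__consumer_offsets), whose replication factor is controlled by a separate broker setting; a single-broker sketch:
offsets.topic.replication.factor=1
default.replication.factor=1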