ZooKeeper SASL security - apache-kafka

I'm using ZooKeeper version 3.4.12 and hit the error below when trying to enable SASL. Can someone help with this?
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=true
keyTab="/tmp/kafka/zookeeper.service.keytab
principal="zookeeper/chefclient.xyz.local#XYZ.LOCAL";
};
Error:
[2018-11-02 09:35:01,998] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null

The issue is fixed: stray spaces in the jaas file were causing the problem.

The first step in ZooKeeper security is to secure communication between the quorum peers; see the complete explanation in the Server-Server mutual authentication guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Server-Server+mutual+authentication
Your ZooKeeper jaas file should have QuorumServer and QuorumLearner sections.
Next, you can secure communication between the ZooKeeper cluster and clients such as Kafka; see the full explanation in the Client-Server mutual authentication guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication
You add a Server section to the ZooKeeper jaas file, and your Kafka jaas file should have a Client section. A sketch of the matching zoo.cfg settings follows.
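For reference, here is a sketch of the zoo.cfg settings that usually accompany those jaas sections (the property names come from the ZooKeeper guides linked above; the values are illustrative, not taken from this question):
# Quorum peer mutual authentication (uses the QuorumServer/QuorumLearner sections)
quorum.auth.enableSasl=true
quorum.auth.learnerRequireSasl=true
quorum.auth.serverRequireSasl=true
quorum.auth.learner.loginContext=QuorumLearner
quorum.auth.server.loginContext=QuorumServer
quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST
# Client-server authentication (uses the Server section)
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider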

I think the problem is that you are missing the closing double quotation mark at:
keyTab="/tmp/kafka/zookeeper.service.keytab

I was experiencing the same problem...
SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null
This error was also in the Zookeeper Server log:
ERROR [NIOWorkerThread-6:ZooKeeperServer#1191] - cnxn.saslServer is null: cnxn object did not initialize its saslServer properly.
My configuration uses mutual Kerberos authentication between ZooKeeper instances.
The solution
Missing "Server" Section
My problem was that I didn't have the Server section present in my server jaas configuration for Zookeeper.
I needed something like:
QuorumServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/zookeeper.keytab"
storeKey=true
useTicketCache=false
debug=false
principal="zookeeper/zk1.example.com#EXAMPLE.COM";
};
QuorumLearner {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/zookeeper.keytab"
storeKey=true
useTicketCache=false
debug=false
principal="zookeeper/zk2.example.com#EXAMPLE.COM";
};
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/zk3.example.com#EXAMPLE.COM";
};
When clients connect to Zookeeper they will authenticate against the Server section of this configuration. This is required for SASL to work.
Also make sure you have conf/java.env set with something like:
SERVER_JVMFLAGS="${SERVER_JVMFLAGS} -Djava.security.auth.login.config=/opt/zookeeper/conf/server-jaas.conf"
CLIENT_JVMFLAGS="${CLIENT_JVMFLAGS} -Djava.security.auth.login.config=/opt/zookeeper/conf/client-jaas.conf"
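For completeness, the client-jaas.conf referenced by CLIENT_JVMFLAGS holds the matching Client section. A minimal sketch, assuming the same keytab layout as above (the principal name and keytab path are examples, not from the original setup):
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/zkclient.keytab"
storeKey=true
useTicketCache=false
principal="zkclient@EXAMPLE.COM";
};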

Related

Kafka cluster configuration file jaas-kafka-server.conf

In one of our configuration files, jaas-kafka-server.conf, we have passwords which should be changed:
cat jaas-kafka-server.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="kafka123";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="kafka123";
};
What is the procedure for changing such passwords to more secure ones? And what is the impact of such an operation on the Kafka cluster and on the applications connecting to Kafka?
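For reference, the SCRAM credentials used by the KafkaServer section are stored in ZooKeeper and are typically rotated with the kafka-configs tool. A sketch, assuming a SCRAM-SHA-512 user named admin (host and password are placeholders; verify the flags against your Kafka version):
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'SCRAM-SHA-512=[password=new-secret]' \
  --entity-type users --entity-name admin
The jaas files on the brokers, and on any clients authenticating as this user, would then need the matching new password. The Client section above uses ZooKeeper's DigestLoginModule, which is a separate mechanism whose password is defined in the ZooKeeper server's own jaas file.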

Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set from Kafka rest proxy

I am trying to use the Kafka REST proxy with an AWS MSK cluster.
MSK Encryption details:
Within the cluster
TLS encryption: Enabled
Between clients and brokers
TLS encryption: Enabled
Plaintext: Not enabled
I have created the topic "TestTopic" on MSK and then created another EC2 instance in the same VPC as MSK to act as the REST proxy. Here are the details from kafka-rest.properties:
zookeeper.connect=z-3.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181,z-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181
bootstrap.servers=b-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096,b-2.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
ssl.truststore.location=/tmp/kafka.client.truststore.jks
I have also created a rest-jaas.properties file with the content below:
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="username"
password="password";
};
and then set the java.security.auth.login.config using:
export KAFKA_OPTS=-Djava.security.auth.login.config=/home/ec2-user/confluent-6.1.1/rest-jaas.properties
After this, I started the Kafka REST proxy using:
./kafka-rest-start /home/ec2-user/confluent-6.1.1/etc/kafka-rest/kafka-rest.properties
But when I tried to put an event on TestTopic by calling the service from Postman:
POST: http://IP_of_ec2instance:8082/topics/TestTopic
I got a 500 error, and on the EC2 instance I can see this error:
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:441)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:291)
at io.confluent.kafkarest.ProducerPool.buildNoSchemaProducer(ProducerPool.java:120)
at io.confluent.kafkarest.ProducerPool.buildBinaryProducer(ProducerPool.java:106)
at io.confluent.kafkarest.ProducerPool.<init>(ProducerPool.java:71)
at io.confluent.kafkarest.ProducerPool.<init>(ProducerPool.java:60)
at io.confluent.kafkarest.ProducerPool.<init>(ProducerPool.java:53)
at io.confluent.kafkarest.DefaultKafkaRestContext.getProducerPool(DefaultKafkaRestContext.java:54)
... 64 more
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:141)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:106)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:92)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:139)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:74)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:120)
at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:449)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:430)
... 71 more
I can also see that value of sasl.jaas.config = null in the ProducerConfig values.
Could someone please help me with this? Thanks in advance!
Finally, the issue is fixed. I am posting the fix here so that it can benefit someone else:
The kafka-rest.properties file should contain the text below:
zookeeper.connect=z-3.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181,z-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:2181
bootstrap.servers=b-1.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096,b-2.msk.xxxx.xx.xxxxxx-1.amazonaws.com:9096
client.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";
client.security.protocol=SASL_SSL
client.sasl.mechanism=SCRAM-SHA-512
There was no need to create the rest-jaas.properties file, nor to export KAFKA_OPTS; the client. prefix is how the REST proxy passes these settings through to the Kafka clients it embeds.
After these changes, I was able to put messages on the Kafka topic using SCRAM authentication.
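For reference, the Postman call can be reproduced with curl against the REST proxy's v2 API. A sketch (the host placeholder is from the question; the payload is just an example):
curl -X POST \
  -H "Content-Type: application/vnd.kafka.json.v2+json" \
  --data '{"records":[{"value":{"test":"message"}}]}' \
  http://IP_of_ec2instance:8082/topics/TestTopic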

Cannot set up kerberized Kafka broker: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null

So I have been trying to set up a Kafka broker and ZooKeeper on a single node with Kerberos enabled.
Most of it is based on this tutorial: https://qiita.com/visualskyrim/items/8f48ff107232f0befa5a
System: Ubuntu 18.04
Setup: a ZooKeeper instance and a Kafka broker process in one EC2 box, and a KDC in another EC2 box. Both are in the same security group with UDP port 88 open.
Here is what I have done so far.
Downloaded the Kafka broker from here: https://kafka.apache.org/downloads
Created a KDC and correctly generated the keytabs (verified via kinit -t). Then defined the krb5.conf file and host entries for the KDC in the /etc/hosts file.
Created two jaas configs
cat zookeeper_jaas.conf
Server {
com.sun.security.auth.module.Krb5LoginModule required debug=true
useKeyTab=true
keyTab="/etc/kafka/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper";
};
cat /etc/kafka/kafka_jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required debug=true
useKeyTab=true
useTicketCache=false
storeKey=true
keyTab="/etc/kafka/kafka.keytab"
principal="kafka";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required debug=true
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/etc/kafka/kafka.keytab"
principal="kafka";
};
Added some lines to the Kafka broker config.
The config/zookeeper.properties file has these extra lines added:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
and config/server.properties (the config file for the broker) has these extra lines added:
listeners=SASL_PLAINTEXT://kafka.com:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
In one screen session, I do
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf -Dsun.security.krb5.debug=true"
and then run
bin/zookeeper-server-start.sh config/zookeeper.properties
And this correctly runs, and zookeeper starts up.
In another screen session I do
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf -Dsun.security.krb5.debug=true"
and then run
bin/kafka-server-start.sh config/server.properties
But this one fails with this exception:
[2020-02-27 22:56:04,724] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2020-02-27 22:56:04,726] ERROR [ZooKeeperClient Kafka server] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2020-02-27 22:56:04,842] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:560)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1610)
at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1532)
at kafka.zk.KafkaZkClient.$anonfun$createTopLevelPaths$1(KafkaZkClient.scala:1524)
at kafka.zk.KafkaZkClient.$anonfun$createTopLevelPaths$1$adapted(KafkaZkClient.scala:1524)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.zk.KafkaZkClient.createTopLevelPaths(KafkaZkClient.scala:1524)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:388)
at kafka.server.KafkaServer.startup(KafkaServer.scala:207)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
I also enabled the Kerberos debug logs.
This was the Kerberos credentials log:
DEBUG: ----Credentials----
client: kafka@VISUALSKYRIM
server: zookeeper/localhost@VISUALSKYRIM
ticket: sname: zookeeper/localhost@VISUALSKYRIM
endTime: 1582881662000
----Credentials end----
This implies the Client jaas config is somehow the issue, and that it arises from this line of code: https://github.com/apache/zookeeper/blob/master/zookeeper-server/src/main/java/org/apache/zookeeper/client/ZooKeeperSaslClient.java#L310, but I cannot for the life of me figure out why. I cross-referenced it with the Confluent docs (https://docs.confluent.io/2.0.0/kafka/sasl.html) and it seems I am doing the right thing. So what gives?
Can anyone help me out with this? Thanks.
Well, it turns out Kafka implicitly assumes ZooKeeper's principal is
zookeeper/localhost
In order to make progress, I:
Created a zookeeper/localhost principal in the KDC.
Created a keytab for it called zookeeper-server.keytab.
Updated the ZooKeeper jaas config to be:
Server {
com.sun.security.auth.module.Krb5LoginModule required debug=true
useKeyTab=true
keyTab="/etc/kafka/zookeeper-server.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/localhost";
};
With this, the error no longer appears.
The Kafka producer seems to pick up the SPNs based on my /etc/hosts config:
# Replace the Kerberos KDC server IP with the appropriate IP address
172.31.40.220 kerberos.com
127.0.0.1 localhost
Maybe try looking at KAFKA_HOME/config/server.properties and changing the default localhost to your host in:
zookeeper.connect=localhost:2181
In my case, the principal cname was not the same as the sname. Example:
cname zk/myhost@REALM.MY
sname zookeeper/localhost@REALM.MY
I also ended up using the EXTRA_ARGS option -Dzookeeper.sasl.client.username=zk, as stated in the docs.
Worked for me. It seems the code that should take care of this ignores it and uses this property instead.
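For reference, a sketch of passing that system property via KAFKA_OPTS instead (the standard start scripts append KAFKA_OPTS to the JVM flags; the value zk must match the primary of your ZooKeeper service principal):
export KAFKA_OPTS="$KAFKA_OPTS -Dzookeeper.sasl.client.username=zk"
bin/kafka-server-start.sh config/server.properties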

No JAAS configuration section named 'Server' was found in '/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf'

When I run ZooKeeper from the kafka_2.12-2.3.0 package, I get the error above:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf"
$ ./bin/zookeeper-server-start.sh config/zookeeper.properties
and the zookeeper_jaas.conf is
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
and the zookeeper.properties file is
server=localhost:9092
#server=localhost:2888:3888
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="ibm" password="ibm-secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/kafka/apache-zookeeper-3.5.5-bin/zookeeperkeys/client.truststore.jks
ssl.truststore.password=test1234
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
Can anyone suggest what the reason could be?
You seem to have mixed a bunch of Kafka SASL configuration into your Zookeeper configuration files. Zookeeper and Kafka have different SASL support, so this is not going to work.
I'm guessing you want to enable SASL authentication between Kafka and Zookeeper. In that case you need to follow the Zookeeper Client-Server guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication
Zookeeper does not support SASL Plain, but DigestMD5 is pretty similar. In that case your jaas.conf file should look like:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret"
user_bob="bobsecret";
};
Then you need to configure your Kafka brokers to connect to Zookeeper with SASL. You can do that using another jaas.conf file (this time loading it in Kafka):
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="bob"
password="bobsecret";
};
Note: you can also enable SASL between the Zookeeper servers. To do so, follow the Server-Server guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Server-Server+mutual+authentication
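Putting it together, a minimal sketch of the ZooKeeper side for the Client-Server case (the authProvider, requireClientAuthScheme and jaasLoginRenew lines come from the question's own config; the jaas path matches the question's export):
# zookeeper.properties: keep only ZooKeeper settings, dropping the Kafka client properties
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
Then start ZooKeeper with the jaas file loaded:
export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf"
./bin/zookeeper-server-start.sh config/zookeeper.properties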

Kafka Streams: Kerberos ticket renewal

When the Kafka Streams app is started, the following jaas file is used. However, the tickets are not renewed automatically by the Streams application; it fails with the exception below after the ticket expires. What should we do to keep the Kerberos ticket automatically renewed?
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
doNotPrompt=true
useTicketCache=true
principal="astvy#DEV.ACM.COM"
useKeyTab=true
serviceName="kafka"
keyTab="/home/astvy/astvy.headless.keytab"
renewTGT=true
client=true;
};
Error
Abort sending since an error caught with a previous record (key ED1812 value org.cox.model.HourlyUnit@83e6c99 timestamp 1536165112061) to topic dub_hourlyunit_source1 due to org.apache.kafka.common.errors.SaslAuthenticationException:
An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException:
GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]) occurred when evaluating SASL token received from the Kafka Broker.
Kafka Client will go to AUTHENTICATION_FAILED state.
After making a few corrections as below (the key change is setting useTicketCache to false), we have not seen the issue recur. As the TGT renewal lifetime is set to 7 days, we are continuing to monitor whether the issue has been resolved, and will confirm in a few days whether these changes address it permanently.
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
doNotPrompt=true
useTicketCache=false
principal="astvy#DEV.ACM.COM"
useKeyTab=true
serviceName="kafka"
keyTab="/home/astvy/astvy.headless.keytab"
storeKey=true;
};
Kafka Streams uses Kerberos and SSL through its configs just like any other Kafka client, such as the producer and consumer, so I cannot really think of any issue inside Streams itself that would cause tickets not to be renewed.
A quick Google search turned up one issue that may be related if you are using Java 8: https://issues.apache.org/jira/browse/HADOOP-10786