Camel: How to set the JaasConfig in the property - apache-kafka

I was trying to set the keytabs via Camel and wasn't able to map them to a property the way I can in Spring.
https://kafka.apache.org/documentation/#security_client_staticjaas
I was trying to either point the JAAS config to a file or set the individual properties:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
principal="kafka-client-1#EXAMPLE.COM";
};
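For reference, here is a minimal sketch of mapping that JAAS content onto camel-kafka configuration properties. This assumes the component exposes securityProtocol, saslKerberosServiceName and saslJaasConfig options that map onto the underlying Kafka client settings; the broker address and property names are assumptions, not taken from the question:
# application.properties (option names assumed)
camel.component.kafka.brokers=broker1:9092
camel.component.kafka.security-protocol=SASL_PLAINTEXT
camel.component.kafka.sasl-kerberos-service-name=kafka
camel.component.kafka.sasl-jaas-config=com.sun.security.auth.module.Krb5LoginModule required \
  useKeyTab=true storeKey=true \
  keyTab="/etc/security/keytabs/kafka_client.keytab" \
  principal="kafka-client-1@EXAMPLE.COM";
The value of the inline property is the body of the KafkaClient section above, without the surrounding KafkaClient { ... } wrapper, which is how Kafka's sasl.jaas.config expects it.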

Related

Kafka cluster configuration file jaas-kafka-server.conf

In one of our configuration files, jaas-kafka-server.conf, we have passwords which should be changed:
cat jaas-kafka-server.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="kafka123";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="kafka123";
};
What should be the procedure for changing such passwords to more secure ones? What is the impact of such an operation on the Kafka cluster and on the applications connecting to Kafka?

Cannot setup kerberized kafka broker: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null

So I have been trying to set up a Kafka broker and Zookeeper on a single node with Kerberos enabled.
Most of it is based on this tutorial: https://qiita.com/visualskyrim/items/8f48ff107232f0befa5a
System: Ubuntu 18.04
Setup: a Zookeeper instance and a Kafka broker process in one EC2 box, and a KDC in another EC2 box. Both are in the same security group with UDP port 88 open.
Here is what I have done so far.
Downloaded the Kafka broker from here: https://kafka.apache.org/downloads
Created a KDC and correctly generated keytabs (verified via kinit -t). Then defined the krb5 config file and added host entries for the KDC in the /etc/hosts file.
Created two jaas configs
cat zookeeper_jaas.conf
Server {
com.sun.security.auth.module.Krb5LoginModule required debug=true
useKeyTab=true
keyTab="/etc/kafka/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper";
};
cat /etc/kafka/kafka_jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required debug=true
useKeyTab=true
useTicketCache=false
storeKey=true
keyTab="/etc/kafka/kafka.keytab"
principal="kafka";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required debug=true
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/etc/kafka/kafka.keytab"
principal="kafka";
};
Added some lines to the Zookeeper and Kafka broker configs.
The config/zookeeper.properties file has these extra lines added:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
and config/server.properties (the config file for the broker) has these extra lines added:
listeners=SASL_PLAINTEXT://kafka.com:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
In one screen session, I do
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf -Dsun.security.krb5.debug=true"
and then run
bin/zookeeper-server-start.sh config/zookeeper.properties
And this correctly runs, and zookeeper starts up.
In another screen session I do
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf -Dsun.security.krb5.debug=true"
and then run
bin/kafka-server-start.sh config/server.properties
But this one fails, with this exception
[2020-02-27 22:56:04,724] ERROR SASL authentication failed using login context 'Client' with
exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member:
the quorum member's saslToken is null.
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:279)
at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:242)
at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:805)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2020-02-27 22:56:04,726] ERROR [ZooKeeperClient Kafka server] Auth failed.
(kafka.zookeeper.ZooKeeperClient)
[2020-02-27 22:56:04,842] ERROR Fatal error during KafkaServer startup. Prepare to shutdown
(kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for
/consumers
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:560)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1610)
at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1532)
at kafka.zk.KafkaZkClient.$anonfun$createTopLevelPaths$1(KafkaZkClient.scala:1524)
at kafka.zk.KafkaZkClient.$anonfun$createTopLevelPaths$1$adapted(KafkaZkClient.scala:1524)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.zk.KafkaZkClient.createTopLevelPaths(KafkaZkClient.scala:1524)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:388)
at kafka.server.KafkaServer.startup(KafkaServer.scala:207)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
I also enabled the Kerberos debug logs.
This was the credentials log for Kerberos:
DEBUG: ----Credentials----
client: kafka@VISUALSKYRIM
server: zookeeper/localhost@VISUALSKYRIM
ticket: sname: zookeeper/localhost@VISUALSKYRIM
endTime: 1582881662000
----Credentials end----
This implies the Client jaas config is somehow the issue, and that the failure arises from this line of code: https://github.com/apache/zookeeper/blob/master/zookeeper-server/src/main/java/org/apache/zookeeper/client/ZooKeeperSaslClient.java#L310, but I cannot for the life of me figure out why. I cross-referenced it with the Confluent docs (https://docs.confluent.io/2.0.0/kafka/sasl.html) and it seems I am doing the right thing. So what gives?
Can anyone help me out with this? Thanks.
Well, it turns out Kafka implicitly believes Zookeeper's principal is
zookeeper/localhost
In order to make progress I:
Created a zookeeper/localhost principal in the KDC.
Created a keytab for it called zookeeper-server.keytab (see the kadmin sketch after the config below).
Updated the Zookeeper jaas config to be:
Server {
com.sun.security.auth.module.Krb5LoginModule required debug=true
useKeyTab=true
keyTab="/etc/kafka/zookeeper-server.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/localhost";
};
Which now no longer shows this error.
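For reference, a minimal sketch of how that principal and keytab can be created, assuming an MIT KDC (the realm is taken from the debug output above; adjust names and paths as needed):
kadmin.local -q "addprinc -randkey zookeeper/localhost@VISUALSKYRIM"
kadmin.local -q "ktadd -k /etc/kafka/zookeeper-server.keytab zookeeper/localhost@VISUALSKYRIM"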
The Kafka producer seems to be picking up the SPNs based on my /etc/hosts config:
# Replace the Kerberos KDC server IP with the appropriate IP addresses
172.31.40.220 kerberos.com
127.0.0.1 localhost
Maybe try looking at KAFKA_HOME/config/server.properties and changing the default localhost to your host in
zookeeper.connect=localhost:2181
since the principal cname was not the same as the sname. Example:
cname zk/myhost@REALM.MY
sname zookeeper/localhost@REALM.MY
I also ended up using the EXTRA_ARGS option -Dzookeeper.sasl.client.username=zk as stated in the docs.
Worked for me. It seems the code (1, 2) that should take care of this ignores it and uses this property instead.
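As a minimal sketch, assuming the same KAFKA_OPTS approach used earlier in this thread to pass JVM system properties to the broker:
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf \
  -Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf \
  -Dzookeeper.sasl.client.username=zk"
bin/kafka-server-start.sh config/server.properties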

No JAAS configuration section named 'Server' was found in '/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf'

When I run the Zookeeper from the package in kafka_2.12-2.3.0 I am getting the above error:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf"
$ ./bin/zookeeper-server-start.sh config/zookeeper.properties
and the zookeeper_jaas.conf is
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
and the zookeeper.properties file is
server=localhost:9092
#server=localhost:2888:3888
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="ibm" password="ibm-secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/kafka/apache-zookeeper-3.5.5-bin/zookeeperkeys/client.truststore.jks
ssl.truststore.password=test1234
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
Can anyone suggest what could be the reason?
You seem to have mixed a bunch of Kafka SASL configuration into your Zookeeper configuration files. Zookeeper and Kafka have different SASL support, so this is not going to work.
I'm guessing you want to enable SASL authentication between Kafka and Zookeeper. In that case you need to follow the Zookeeper Server-Client guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication
Zookeeper does not support SASL Plain, but DigestMD5 is pretty similar. In that case your jaas.conf file should look like:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret"
user_bob="bobsecret";
};
Then you need to configure your Kafka brokers to connect to Zookeeper with SASL. You can do that using another jaas.conf file (this time loading it in Kafka):
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="bob"
password="bobsecret";
};
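As a sketch of how each file is then loaded (reusing the java.security.auth.login.config system property and the authProvider setting already shown elsewhere in this thread; the file paths are placeholders):
# Zookeeper side: enable the SASL auth provider in zoo.cfg
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
# and point the Zookeeper JVM at its jaas file, e.g. in conf/java.env
SERVER_JVMFLAGS="-Djava.security.auth.login.config=/path/to/zookeeper_jaas.conf"
# Kafka side: point the broker JVM at the jaas file containing the Client section
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_jaas.conf"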
Note: you can also enable SASL between the Zookeeper servers. To do so, follow the Server-Server guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Server-Server+mutual+authentication

Zookeeper SASL security

I'm using Zookeeper version 3.4.12 and when trying to enable SASL
I found the error below. Can someone help with this?
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=true
keyTab="/tmp/kafka/zookeeper.service.keytab
principal="zookeeper/chefclient.xyz.local#XYZ.LOCAL";
};
Error :
[2018-11-02 09:35:01,998] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null
The issue is fixed; spaces in the jaas file were causing the problem.
The first step in Zookeeper security is to secure quorum peer communication. Complete explanation here.
Your Zookeeper jaas file should have QuorumServer and QuorumLearner sections.
Next, you can secure communication between the Zookeeper cluster and clients such as Kafka. Full explanation here.
You add a Server section to the Zookeeper jaas file, and your Kafka jaas file should have a Client section.
I think the problem is that you are missing a closing double quotation mark at
keyTab="/tmp/kafka/zookeeper.service.keytab
I was experiencing the same problem...
SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null
This error was also in the Zookeeper Server log:
ERROR [NIOWorkerThread-6:ZooKeeperServer@1191] - cnxn.saslServer is null: cnxn object did not initialize its saslServer properly.
My configuration uses mutual Kerberos authentication between Zookeeper instances.
The solution
Missing "Server" Section
My problem was that I didn't have the Server section present in my server jaas configuration for Zookeeper.
I needed something like:
QuorumServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/zookeeper.keytab"
storeKey=true
useTicketCache=false
debug=false
principal="zookeeper/zk1.example.com#EXAMPLE.COM";
};
QuorumLearner {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/zookeeper.keytab"
storeKey=true
useTicketCache=false
debug=false
principal="zookeeper/zk2.example.com#EXAMPLE.COM";
};
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/zk3.example.com#EXAMPLE.COM";
};
When clients connect to Zookeeper they will authenticate against the Server section of this configuration. This is required for SASL to work.
Also make sure you have conf/java.env set with something like:
SERVER_JVMFLAGS="${SERVER_JVMFLAGS} -Djava.security.auth.login.config=/opt/zookeeper/conf/server-jaas.conf"
CLIENT_JVMFLAGS="${CLIENT_JVMFLAGS} -Djava.security.auth.login.config=/opt/zookeeper/conf/client-jaas.conf"

Kafka configure jaas using sasl.jaas.config on kubernetes

I'm using this helm chart: https://github.com/helm/charts/tree/master/incubator/kafka
and these overrides in values.yaml
configurationOverrides:
  advertised.listeners: |-
    EXTERNAL://kafka-${KAFKA_BROKER_ID}.host-removed:$((31090 + ${KAFKA_BROKER_ID}))
  listener.security.protocol.map: |-
    PLAINTEXT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
  sasl.enabled.mechanisms: SCRAM-SHA-256
  auto.create.topics.enable: false
  inter.broker.listener.name: PLAINTEXT
  sasl.mechanism.inter.broker.protocol: SCRAM-SHA-256
  listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
based on this documentation: https://kafka.apache.org/documentation/#security_jaas_broker
(quick summary)
Brokers may also configure JAAS using the broker configuration property sasl.jaas.config. The property name must be prefixed with the listener prefix including the SASL mechanism, i.e. listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config. Only one login module may be specified in the config value. If multiple mechanisms are configured on a listener, configs must be provided for each mechanism using the listener and mechanism prefix
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="admin" \
password="admin-secret";
The problem is that when I start Kafka I get the following error:
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'plaintext.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
According to the order of precedence, it should use the static JAAS file if the above config is NOT set.
If JAAS configuration is defined at different levels, the order of precedence used is:
Broker configuration property listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config
{listenerName}.KafkaServer section of static JAAS configuration
KafkaServer section of static JAAS configuration
The helm chart doesn't provide a way to configure this jaas file, so using this property seems to be the desired way; I'm just confused as to what is configured incorrectly.
Note: The cluster works fine if I disable all SASL and just use plain text but that's not much good in a real environment.
You've defined 2 listeners, PLAINTEXT and EXTERNAL, and mapped both to SASL_PLAINTEXT.
Is this really what you wanted to do? Or did you want PLAINTEXT to not require SASL but just be plaintext?
If you really want both to be SASL, then both of them need a JAAS configuration. In your question, I only see a JAAS configuration for EXTERNAL:
listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
As you've mapped PLAINTEXT to SASL_PLAINTEXT, it also requires a JAAS configuration. You can specify it using for example:
listener.name.PLAINTEXT.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
If you wanted your PLAINTEXT listener to actually be Plaintext without SASL, then you need to update the listener mapping:
listener.security.protocol.map: |-
  PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
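Putting it together, a sketch of the overrides with PLAINTEXT kept as actual plaintext for inter-broker traffic and SASL only on the EXTERNAL listener (values copied from the question; the remaining overrides stay as they are):
configurationOverrides:
  advertised.listeners: |-
    EXTERNAL://kafka-${KAFKA_BROKER_ID}.host-removed:$((31090 + ${KAFKA_BROKER_ID}))
  listener.security.protocol.map: |-
    PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
  inter.broker.listener.name: PLAINTEXT
  sasl.enabled.mechanisms: SCRAM-SHA-256
  listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
With this mapping the sasl.mechanism.inter.broker.protocol override is no longer needed, since inter-broker traffic on PLAINTEXT does not use SASL.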