How to securely provide SSL keystore and key passwords - apache-kafka

I use SSL keystores for Jetty 9 and Kafka. I need to provide keystore and key passwords to access the keystore and private key. However, I don't want to provide these passwords in clear text in the configuration files. What other options are there to securely provide/encrypt the passwords, and what are the pros and cons of each approach?

Since Kafka 2.0.0, all password configs can be preloaded in ZooKeeper before you start the brokers. The kafka-configs.sh tool can be used to store passwords in an encrypted format in ZooKeeper, avoiding the need to specify them in plaintext in the properties file.
See the Updating Broker Configs section in the Kafka docs, especially the "Updating Password Configs in ZooKeeper Before Starting Brokers" paragraph.
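For illustration, a sketch of the kind of command involved, assuming broker id 0, a listener named INTERNAL, and placeholder secrets:

bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type brokers --entity-name 0 --alter \
  --add-config 'listener.name.internal.ssl.keystore.password=keystore-password,listener.name.internal.ssl.key.password=key-password,password.encoder.secret=encoder-secret'

The same password.encoder.secret must then be configured in server.properties so the broker can decode the stored values.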

Yes. It is mandatory to configure the password encoder in server.properties;
otherwise the server cannot decode the passwords.
It worked for me once I added password.encoder.secret to server.properties.
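For example, a one-line sketch (the secret value is a placeholder and must match the secret used when the passwords were encoded):

# server.properties
password.encoder.secret=my-encoder-secret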

Related

How to enable SASL mechanism in kafka locally

How can I enable a SASL mechanism with JAAS authentication for Kafka, so that consumers/producers have to provide a username and password in order to publish to the broker?
The process of enabling SASL authentication in Kafka is extensively described in the Authentication using SASL section in the documentation. I suggest you follow the official documentation as it contains instructions for all the mechanisms and recommendations for production environments.
To give a bit of background, at a glance you need to:
Create a JAAS file for brokers with a KafkaServer block and the configuration for the specific mechanism.
Add -Djava.security.auth.login.config=<PATH_TO_JAAS_FILE> to your broker's JVM command-line arguments.
Configure clients to use SASL via the security.protocol, sasl.mechanism and sasl.jaas.config settings, as sketched below.
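As a minimal sketch of these steps using SASL/PLAIN (all usernames, passwords and ports are placeholders), the broker JAAS file could look like:

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};

with the matching broker settings in server.properties:

listeners=SASL_PLAINTEXT://localhost:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

and the client-side settings:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="alice-secret";

For production, the documentation recommends SASL_SSL rather than SASL_PLAINTEXT, so credentials are not sent over an unencrypted connection.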

Select proper KafkaUser authentication type?

Maybe I am missing something; if so, forgive my ignorance.
Here is what we have:
We use TLS authentication listeners in the Kafka cluster (this can be changed; we can add new types of listeners).
When connecting to a Kafka topic from Java code, I use an SSL certificate generated for the Kafka user.
If I decide to avoid using SSL certificates, for two reasons:
I will connect to Kafka topics only from trusted OpenShift cluster PODs
To avoid updating the re-generated user SSL certificate on the producer/consumer side every year (Kafka generates user certificates with a one-year validity period)
Would SCRAM-SHA-512 be a better (and the only?) KafkaUser authentication type for the two reasons above? Or does SCRAM-SHA-512 also require SSL certificates?
Another approach I saw was no authentication, but I am not sure how ACLs can be used for such users. How do I pass to the server the information about which user is connecting? Is it possible to use ACLs with a Kafka user that is authenticated neither by an SSL certificate nor by a password?
[UPD] Environment is built on Strimzi (Apache Kafka cluster in OpenShift)
Using SCRAM-SHA-512 does not require TLS. So you can just disable the TLS encryption in the Kafka custom resource (.spec.kafka.listeners -> set tls: false) and enable SCRAM-SHA-512 authentication in the same place, in the authentication section. Then you just use the KafkaUser to create the user and get the password.
In general, TLS encryption is normally always recommended. But the SCRAM-SHA mechanisms do not send the password over the network directly, so using them without encryption should not leak the password. In the end, it is up to you to decide.
Also, just as a side note: the certificates are valid for one year by default. You can change that in the Kafka CR.
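As a sketch, assuming the Strimzi v1beta2 API and illustrative resource names, the relevant pieces could look like:

# Kafka CR excerpt: internal listener without TLS, with SCRAM-SHA-512 authentication
spec:
  kafka:
    listeners:
      - name: scram
        port: 9092
        type: internal
        tls: false
        authentication:
          type: scram-sha-512

# KafkaUser: Strimzi generates the password into a Secret named after the user
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512

The certificate validity mentioned above is controlled in the same Kafka CR, via spec.clientsCa.validityDays (and renewalDays).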

ZooKeeper authentication not working despite doing all the configuration

I followed the tutorial in the answer to this question:
Kafka SASL zookeeper authentication
I set zookeeper.set.acl=true in server.properties, but I can still access ZooKeeper on port 2181, and it is available to anyone via: kafka-topics --zookeeper <server-name>:2181 --list
PS: instead of <server-name> I put the DN of my server.
An authentication enforcement feature has recently been submitted to the ZooKeeper codebase, and as far as I know there is no stable release yet that supports it.
When you turn on SASL authentication, it will be available, but clients are still able to connect without it. Hence the recommendation is to use ACLs alongside authentication, to prevent non-authenticated users from accessing sensitive data.
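For context, the Kafka-side setup is the zookeeper.set.acl flag plus the migration tool shipped with Kafka; a sketch with placeholder host names:

# server.properties: brokers set ACLs on the znodes they create
zookeeper.set.acl=true

# secure metadata that was created before ACLs were enabled
bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181

Note that the ACLs Kafka sets leave most znodes world-readable and only restrict who can modify them, which is why commands like kafka-topics --list can still succeed for unauthenticated clients.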

Kafka Authentication with SASL - duplicate admin user?

I'm running a distributed Kafka broker where the inter-broker communication is set up with SASL/SSL. For this I adapted the JAAS configuration given here. The finished file looks like this:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret"
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN;
    org.apache.kafka.common.security.scram.ScramLoginModule required;
};
Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret";
};
I noticed that the KafkaServer section has two admin users. I also learned the hard way that I need both, but why is that? I have a feeling I read (and forgot) the reason a few months ago, but I can't seem to find it anymore.
As per the Apache Kafka documentation, the KafkaServer section is used to configure authentication from this broker to other brokers, as well as for clients and other brokers connecting to this broker. The Client section is used for connecting to Zookeeper.
Since your question is about the KafkaServer section, and you are configuring a SASL/PLAIN authentication mechanism, refer to this part of the Apache Kafka documentation:
This configuration defines two users (admin and alice). The properties username and password in the KafkaServer section are used by the broker to initiate connections to other brokers. In this example, admin is the user for inter-broker communication. The set of properties user_userName defines the passwords for all users that connect to the broker and the broker validates all client connections including those from other brokers using these properties.
In other words, there are two separate cases configured here:
When this broker connects out to other brokers it will use the username and password defined in username and password.
When clients and other brokers connect to this broker, the user_userName entries are used to authenticate these connections, where the username is the userName part of the user_userName key, and the password is the value.
So, in your example, this broker will connect to other brokers with a username of admin and a password of admin-secret because of these two lines:
username="admin"
password="admin-secret"
And, clients and other brokers can connect to this broker either with the username password combo of admin / admin-secret or alice / alice-secret because of these two lines:
user_admin="admin-secret"
user_alice="alice-secret"
If you are only accepting connections from other brokers for inter-broker communication on this listener, they are probably using the user_admin="admin-secret" part of the configuration, and the user_alice="alice-secret" entry is probably superfluous.
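To make the two roles explicit, a minimal sketch of the same section (credentials illustrative; // comments added for readability, which the JAAS parser accepts):

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    // identity this broker presents when it connects out to other brokers
    username="admin"
    password="admin-secret"
    // credentials accepted for inbound connections from clients and other brokers
    user_admin="admin-secret"
    user_alice="alice-secret";
};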

Is it possible to add multiple CAs to the Kafka truststore

While configuring Kafka security using SSL certificates, is it possible to have multiple certificate authorities in the Kafka truststore .jks file?
When I tried adding multiple CAs to the truststore, only one CA could be added with the alias caroot. It looks like Kafka supports only one CA at a time. If multiple CAs can be added, please provide the steps.
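For what it's worth, keytool itself accepts multiple CA certificates in a single truststore as long as each entry is given a unique alias; a sketch with illustrative file names:

keytool -importcert -keystore kafka.truststore.jks -alias caroot1 -file ca1-cert.pem
keytool -importcert -keystore kafka.truststore.jks -alias caroot2 -file ca2-cert.pem

Kafka then trusts certificates signed by either CA, since it uses the standard JVM truststore mechanism.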