Kafka Broker - KDC - Kafka Client & Kerberos Authentication

I am trying to understand the intricacies of Kerberos authentication and ticket validation from the Kafka broker's perspective. I will summarize the steps:
1. The Kafka client authenticates with the KDC server.
2. The Kafka client gets the ticket.
3. The Kafka client publishes a message to the broker.
4. The Kafka broker authenticates the client.
My question is: will the Kafka broker validate the ticket that the Kafka client sends? What exactly happens on the broker side? How does the Kafka broker know that the Kafka client has sent a valid, non-expired ticket?

The question is not specific to Kafka; it is about generic Kerberos authentication.
What happens here?
The Kafka broker has a service account (keytab or username/password) as part of its configuration.
This service account has an SPN (Service Principal Name) assigned to it, such as HTTP/BROKER_FQDN.COM.
The client requests a ticket for the SPN of the broker. The KDC knows which account this SPN is attached to. The KDC generates a ticket, encrypts it using the broker's service account password (its long-term key), and sends this ticket to the client.
The client passes this ticket to the broker.
The broker knows that the ticket is encrypted with its own password, and the broker has this password available, either in a keytab or as a direct password (depending on the configuration).
If the broker successfully decrypts the ticket, the client principal becomes available to the broker and the client is considered authenticated. Ticket validation (expiry checks and so on) happens after the ticket is decrypted.
This is basic Kerberos functionality.
You can also look into the delegation and impersonation features of Kerberos, which can be useful for particular use cases.
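In Kafka terms, the broker side of this is usually wired up through a GSSAPI JAAS entry that points at the keytab holding the long-term key. A minimal sketch, where the keytab path, principal, and realm are placeholders rather than anything from the question:
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};
With this in place, the broker can decrypt incoming service tickets locally, which is exactly the validation step described above.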

Select proper KafkaUser authentication type?

Maybe I am missing something; if so, forgive my ignorance.
Here is what we have:
We use TLS authentication listeners in the Kafka cluster (this can be changed; we can add new types of listeners).
When connecting to a Kafka topic from Java code, I use an SSL certificate generated for the Kafka user.
I am considering avoiding the SSL certificate, for two reasons:
1. I will connect to the Kafka topic only from trusted OpenShift cluster pods.
2. I want to avoid updating the re-generated user SSL certificate on the producer/consumer side every year (Kafka generates user certificates with a one-year validity period).
Would SCRAM-SHA-512 be a better (and the only?) authentication type for a KafkaUser, given the two reasons above? Or does SCRAM-SHA-512 also require SSL certificates?
Another approach I saw was no authentication at all, but I am not sure how ACLs can be used for such users. How do I pass to the server the information about which user is connecting? Is it possible to use ACLs with a Kafka user that is authenticated neither by an SSL certificate nor by a password?
[UPD] The environment is built on Strimzi (an Apache Kafka cluster on OpenShift).
Using SCRAM-SHA-512 does not require TLS. So you can just disable the TLS encryption in the Kafka custom resource (.spec.kafka.listeners -> set tls: false) and enable the SCRAM-SHA-512 authentication (same place, in the authentication section). Then you just use the KafkaUser resource to create the user and get the password.
In general, TLS encryption is normally always recommended. But the SCRAM-SHA mechanisms do not send the password over the network directly, so using them without encryption should not leak the password. In the end, it is up to you to decide.
Also, just as a side note: the certificates are valid for one year by default. You can change that in the Kafka CR.
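To make that concrete, here is a minimal sketch of the two custom resources; the listener name, port, and resource names are illustrative, not taken from the question:
# Kafka CR excerpt: internal listener with TLS off and SCRAM-SHA-512 auth
spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
        authentication:
          type: scram-sha-512
---
# KafkaUser: Strimzi generates the password into a Secret with the same name
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-app
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
Your clients would then read the generated password from the my-app Secret instead of mounting a client certificate.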

Kafka Authentication with SASL - duplicate admin user?

I'm running a distributed Kafka broker where the inter-broker communication is set up with SASL/SSL. For this I adapted the JAAS configuration given here. The finished file looks like this:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret"
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN;
    org.apache.kafka.common.security.scram.ScramLoginModule required;
};
Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret";
};
I noticed that the "KafkaServer" section has two admin users. I also learned the hard way that I need both, but why is that? I have the feeling I read (and forgot) the reason a few months ago, but I can't seem to find it anymore.
As per the Apache Kafka documentation, the KafkaServer section is used to configure authentication from this broker to other brokers, as well as for clients and other brokers connecting to this broker. The Client section is used for connecting to Zookeeper.
Since your question is about the KafkaServer section, and you are configuring a SASL/PLAIN authentication mechanism, refer to this part of the Apache Kafka documentation:
This configuration defines two users (admin and alice). The properties username and password in the KafkaServer section are used by the broker to initiate connections to other brokers. In this example, admin is the user for inter-broker communication. The set of properties user_userName defines the passwords for all users that connect to the broker and the broker validates all client connections including those from other brokers using these properties.
In other words, there are two separate cases configured here:
When this broker connects out to other brokers, it uses the credentials defined in username and password.
When clients and other brokers connect to this broker, the user_userName entries are used to authenticate those connections: the username is the userName part of the user_userName key, and the password is the value.
So, in your example, this broker will connect to other brokers with a username of admin and a password of admin-secret because of these two lines:
username="admin"
password="admin-secret"
And clients and other brokers can connect to this broker with either the username/password combo admin / admin-secret or alice / alice-secret, because of these two lines:
user_admin="admin-secret"
user_alice="alice-secret"
If this listener only accepts connections from other brokers for inter-broker communication, those brokers are probably using the user_admin="admin-secret" part of the configuration, and user_alice="alice-secret" is probably superfluous.
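To see the other side of the second case, a client connecting as alice would carry its own JAAS entry, along the lines of this sketch (not part of the question's setup):
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="alice"
    password="alice-secret";
};
As a side note, security.protocol and sasl.mechanism are Kafka configuration properties (they belong in server.properties or the client properties), not JAAS login-module options, so they likely have no effect inside the KafkaServer JAAS section.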

HTTP error 403 when using Confluent Kafka REST Proxy

I use Confluent Kafka REST Proxy to send messages to Apache Kafka.
I set up basic authentication on the REST Proxy, and whenever I submit an HTTP request to the proxy, I get the HTTP 403 error !role.
The proxy requires ZooKeeper, Kafka, and the Schema Registry to be running. I didn't configure any security on these services.
Without authentication, the proxy works and delivers messages to Kafka successfully.
How do I troubleshoot this problem? I have spent multiple hours on it and I still can't fix it.
Check the following:
Is the service or port allowed through the firewall?
Is any antivirus software blocking the service or port?
Are the rights on the Kafka and Confluent folders and the respective log directories granted to the kafka user?
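Beyond those environment checks, the !role text in a Jetty 403 usually means the user authenticated but holds none of the required roles. Assuming the documented REST Proxy basic-auth settings, the role named in authentication.roles has to match a role listed after the password in the login property file. A sketch with illustrative names and paths:
# kafka-rest.properties
authentication.method=BASIC
authentication.realm=KafkaRest
authentication.roles=admin

# JAAS file passed via -Djava.security.auth.login.config
KafkaRest {
    org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
    file="/path/to/password.properties";
};

# password.properties: user: password,role1[,role2...]
restuser: restpassword,admin
If the role after the password (admin here) does not match what is listed in authentication.roles, Jetty rejects the request with exactly this kind of 403.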

Solace Spring Cloud Stream Binding

How do you initialize a Solace Binder with Spring Cloud Stream where the connection AUTHENTICATION_SCHEME is AUTHENTICATION_SCHEME_GSS_KRB?
solace:
  java:
    host: tcp://.....
    msgVpn: myvpn
    client-username: username
    apiProperties:
      AUTHENTICATION_SCHEME: AUTHENTICATION_SCHEME_GSS_KRB
      KRB_SERVICE_NAME: HOST
      JaasLoginContext: SolaceGSS
Error Response (403) - No matching configured Authorization Group was found
The error indicates that client authorization is failing. Client authorization is different from client authentication.
Once a client connection to a Message VPN is successfully authenticated, access to the event broker resources and messaging capabilities within that Message VPN must be authorized for the client.
The default authorization method is Internal. It looks like you have set LDAP as the authorization method, but there is no matching LDAP group for your client.
You can refer to the Solace documentation for more information on configuring LDAP authorization.

Kerberos: How does application server decrypt service ticket?

From the Kerberos architecture perspective, according to this graph, during TGS_REP the client gets a service ticket which is encrypted using the TGS session key. After that, the client takes the service ticket to the application server to get a service.
I have seen documents saying that there is no interaction between the TGS and the application server. So my question is: how does the application server decrypt the service ticket without the TGS session key to verify the correctness of the ticket?
The service ticket is encrypted with the server's long-term key, which is known to the KDC. The application server holds that same key locally (typically in its keytab), so it can decrypt the ticket without ever talking to the TGS. The TGS session key only encrypts the part of the TGS_REP addressed to the client, not the ticket itself.
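You can see this separation in the Java GSS-API: the acceptor side needs only its own credential, backed by the keytab, to process the client's token. A minimal sketch, assuming the process has already logged in via JAAS (Krb5LoginModule with the service keytab); receiveToken is a hypothetical placeholder for the real transport:
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;

public class TicketAcceptor {
    public static void main(String[] args) throws GSSException {
        GSSManager manager = GSSManager.getInstance();
        // A null credential means: use the default acceptor credential
        // obtained from the JAAS login (i.e. the service's long-term key).
        GSSContext context = manager.createContext((GSSCredential) null);
        byte[] token = receiveToken(); // the AP_REQ sent by the client
        // acceptSecContext decrypts the embedded service ticket with the
        // local long-term key; no call to the KDC/TGS happens here.
        context.acceptSecContext(token, 0, token.length);
        // On success, the client's principal is available:
        System.out.println("Authenticated client: " + context.getSrcName());
    }

    private static byte[] receiveToken() {
        return new byte[0]; // placeholder for the real transport
    }
}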