Use plain mode to access kafka - apache-kafka

I have a Kafka cluster on OpenShift (in the cloud). A customer outside of the cloud sends messages to the Kafka cluster, so we set up authentication and authorization for the Kafka cluster and topic and created a user.
The external customer can now publish messages to the Kafka topic via SSL or SASL on port 9094. Now an app inside OpenShift wants to consume messages on port 9092 in plain mode; that is, the app inside OpenShift doesn't want to use any authentication or authorization to access that topic. Do you think this is possible?
This is my Kafka cluster listener configuration:
kafka:
  authorization:
    type: simple
  listeners:
    external:
      authentication:
        type: tls
      port: 9094
      tls: true
      type: route
    plain:
      port: 9092
      tls: false
      type: internal

Related

Is it possible to access Zookeeper in Strimzi Kafka installed with Route listener type on OpenShift?

I have Strimzi Kafka cluster on OpenShift, configured like described here:
https://strimzi.io/blog/2019/04/30/accessing-kafka-part-3/
Basically like this:
kind: Kafka
metadata:
  name: ...
spec:
  kafka:
    version: 2.7.0
    replicas: 2
    listeners:
      plain: {}
      tls:
        authentication:
          type: tls
      external:
        type: route
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
According to the article above, I can only access the bootstrap server via port 443. Basically, this setup works and does what I need.
I am wondering if I can get external access to Zookeeper to manage the cluster via the command line from my machine? And if yes, should I download the Kafka binaries and use the CLI from the archive? Or do I need to log in to the Zookeeper pod (e.g. via the OpenShift UI) and manage the Kafka cluster via the CLI from there?
Thanks in advance.
Strimzi does not provide any access to Zookeeper. It is locked down using mTLS and network policies. If you really need it, you can use this unofficial project https://github.com/scholzj/zoo-entrance and create a route manually yourself. But it is not secure, so use it at your own risk. Opening a terminal inside the Zookeeper pod would be an option as well. But in most cases you should not need Zookeeper access today, as Kafka is preparing for its removal anyway.
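If you go the terminal route, a minimal sketch with the OpenShift CLI could look like the following. The pod name, the path to the Kafka tools and especially the local plaintext ZooKeeper port are assumptions that depend on your cluster name and Strimzi version; newer Strimzi versions secure ZooKeeper with TLS even on localhost, in which case this will not work without the TLS keys.
# Open a shell inside the first ZooKeeper pod (pod name assumed: <cluster-name>-zookeeper-0)
oc exec -it my-cluster-zookeeper-0 -- bash
# Inside the pod, the Kafka distribution ships zookeeper-shell.sh.
# Older Strimzi versions exposed ZooKeeper on a local plaintext port (12181 behind the
# TLS sidecar); adjust or drop this if your version is TLS-only.
./bin/zookeeper-shell.sh localhost:12181 ls /brokers/ids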

TopicAuthorizationException

I created a KafkaUser to access a Kafka topic on the cloud from outside; its definition is as follows. I can use SSL mode to access this topic externally on port 9094.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: data-user
  namespace: abc
  labels:
    strimzi.io/cluster: data-cluster
spec:
  authentication:
    type: tls
  authorization:
    acls:
      - host: '*'
        operation: All
        resource:
          name: data-topic
          patternType: literal
          type: topic
        type: allow
      - host: '*'
        operation: All
        resource:
          name: data-group
          patternType: literal
          type: group
        type: allow
      - host: '*'
        operation: All
        resource:
          name: data-cluster
          patternType: literal
          type: cluster
        type: allow
    type: simple
Now, inside the cloud, I am going to use port 9092 to access this topic without any authentication or authorization. Is that possible?
When I run the consumer, it complains TOPIC_AUTHORIZATION_FAILED:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --group data-group --topic data-topic
[2021-03-06 19:54:22,689] WARN [Consumer clientId=consumer-data-group-1, groupId=data-group] Error while fetching metadata with correlation id 2 : {data-topic=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)
[2021-03-06 19:54:22,692] ERROR [Consumer clientId=consumer-data-group-1, groupId=data-group] Topic authorization failed for topics [data-topic] (org.apache.kafka.clients.Metadata)
[2021-03-06 19:54:22,696] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [osprey2-topic]
Processed a total of 0 messages
My question is: I want to access the topic on port 9092 without any authorization. How can I do that?
Kafka supports only cluster-wide authorization. So you can create multiple listeners with different authentication or none at all, but you can enable or disable authorization only once for the whole cluster. So it is not possible to have the internal listener without authorization. When a client connects over the listener without any authentication, it is connected as the ANONYMOUS user, and this user is checked for ACLs like any other user; if it does not have them, it will not be allowed to do anything.
You can work around this problem by using the Kafka Admin API and giving the ANONYMOUS user the rights for all actions you want to take over the 9092 port. However, it is definitely a bad practice from a security perspective. You should instead use proper authentication on the 9092 listener as well, which will allow you to give users the ACLs they need. If for some reason you do not want to use SSL on the 9092 listener, you can still use, for example, SCRAM-SHA-512 authentication.
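For completeness, here is a rough sketch of what granting such rights to the ANONYMOUS user could look like with the kafka-acls.sh tool. The topic and group names are taken from the question; the bootstrap address and the admin.properties file are placeholders, and the command has to run as a principal that is itself allowed to manage ACLs (for example a Strimzi super user over an authenticated listener).
# BOOTSTRAP and admin.properties are placeholders: point them at a listener where
# you can authenticate as a principal allowed to manage ACLs (e.g. a super user).
BOOTSTRAP=my-cluster-kafka-bootstrap:9093
# Let unauthenticated clients (principal User:ANONYMOUS) read and describe the topic
bin/kafka-acls.sh --bootstrap-server "$BOOTSTRAP" --command-config admin.properties \
  --add --allow-principal User:ANONYMOUS \
  --operation Read --operation Describe \
  --topic data-topic
# Let them use the consumer group as well
bin/kafka-acls.sh --bootstrap-server "$BOOTSTRAP" --command-config admin.properties \
  --add --allow-principal User:ANONYMOUS \
  --operation Read \
  --group data-group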

Kafka Strimzi on AKS: how to configure a PLAIN listener when external listeners (SCRAM-SHA-512, TLS encryption) and ACLs are already configured

I have already configured the external listener and ACLs as follows (and they are functional):
listeners:
  plain: {}
  external:
    type: loadbalancer
    tls: true
    authentication:
      type: scram-sha-512
authorization:
  type: simple
This is hosted on an AKS cluster. Consuming from on-prem already works, and so does producing from inside the cluster, but only with publicIP:9094 as BootstrapServers.
I would like to add a producer inside the cluster and use port 9092 (with BootstrapServers = "my-cluster-kafka-bootstrap:9092", I suppose) with the minimum necessary effort (even without ACLs if possible, without any authentication, without any encryption).
What would you recommend?
Thank you in advance!
So thanks to Jakub from Strimzi (https://github.com/scholzj), this is the minimal config:
listeners:
  plain:
    authentication:
      type: scram-sha-512
  external:
    type: loadbalancer
    tls: true
    authentication:
      type: scram-sha-512
authorization:
  type: simple
And the C# client would look like this
var conf = new ProducerConfig
{
    BootstrapServers = "my-cluster-kafka-bootstrap:9092",
    SecurityProtocol = SecurityProtocol.SaslPlaintext,
    SaslMechanism = SaslMechanism.ScramSha512,
    SaslUsername = "admin",
    SaslPassword = "xxx",
};
The explanation is:
Kafka has only one authorizer configuration for the whole cluster, so it is either on or off for all listeners. The plain listener is without encryption, but because you need authorization for the external listener (and authorization applies to every listener), you need to enable authentication on the plain listener as well; without authentication, authorization would deny everything.
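For a quick test of the same listener without writing code, a roughly equivalent console-producer invocation could look like this; the topic name is a placeholder, and the service name and credentials mirror the C# example above.
# SASL_PLAINTEXT + SCRAM-SHA-512 against the internal bootstrap service on port 9092
bin/kafka-console-producer.sh \
  --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --topic my-topic \
  --producer-property security.protocol=SASL_PLAINTEXT \
  --producer-property sasl.mechanism=SCRAM-SHA-512 \
  --producer-property 'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="xxx";'
The admin user and its password would typically come from a KafkaUser with scram-sha-512 authentication; Strimzi stores the generated password in a Secret with the same name as the user.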

How to access kafka installed outside kubernetes cluster from a service provisioned inside a kubernetes cluster

My setup is as follows: I have a producer service provisioned as part of a minikube cluster, and it is trying to publish messages to a Kafka instance running on the host machine.
I have written a Kafka Service and Endpoints YAML as follows:
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  ports:
    - name: "broker"
      protocol: "TCP"
      port: 9092
      targetPort: 9092
      nodePort: 0
---
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
  namespace: default
subsets:
  - addresses:
      - ip: 10.0.2.2
    ports:
      - name: "broker"
        port: 9092
The IP address of the host machine as seen from inside the minikube cluster, used in the Endpoints object, was obtained with the following command:
minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'"
The problem I am facing is that the topic gets created when the producer tries to publish a message for the first time, but no messages are ever written to it.
Digging into the pod logs, I found that the producer is trying to connect to a Kafka instance on localhost (or something like that, I am not really sure):
2020-05-17T19:09:43.021Z [warn] org.apache.kafka.clients.NetworkClient [] -
[Producer clientId=/system/sharding/kafkaProducer-greetings/singleton/singleton/producer]
Connection to node 0 (omkara/127.0.1.1:9092) could not be established. Broker may not be available.
Based on this, I suspected that I probably need to modify server.properties with the following change:
listeners=PLAINTEXT://localhost:9092
This, however, only changed the IP address in the log:
2020-05-17T19:09:43.021Z [warn] org.apache.kafka.clients.NetworkClient [] -
[Producer clientId=/system/sharding/kafkaProducer-greetings/singleton/singleton/producer]
Connection to node 0 (omkara/127.0.0.1:9092) could not be established. Broker may not be available.
I am not sure what IP address must be used here, or what an alternative solution is, and whether it is even possible to connect from inside the Kubernetes cluster to a Kafka instance installed outside the cluster.
Because the producer's Kafka client (inside minikube) does not reach the broker at the same address as clients on the host, we need to configure separate listeners like so:
listeners=INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://localhost:9093,EXTERNAL://10.0.2.2:9092
inter.broker.listener.name=INTERNAL
We can then verify the messages in the topic from the host, using the INTERNAL listener advertised as localhost:9093:
kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic greetings --from-beginning
{"name":"Alice","message":"Namastey"}
You can find a detailed explanation of understanding and provisioning Kafka listeners here.
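To confirm the Service/Endpoints indirection itself, a quick smoke test is to produce through the in-cluster kafka Service from any pod that has the Kafka CLI tools available (the Service and topic names come from the question; the presence of such a pod is an assumption), and then check the topic with the consumer command above.
# "kafka:9092" resolves through the Service/Endpoints objects to 10.0.2.2:9092 on the host
bin/kafka-console-producer.sh --bootstrap-server kafka:9092 --topic greetings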

Strimzi operator Kafka cluster ACL not enabling with type: simple

We know that to enable Kafka ACLs the property authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer has to be added to server.properties, but how do we enable that if the Kafka cluster is run by the Strimzi operator?
From the Strimzi documentation I have come to understand that in order to enable authorization, I need to add the code below for kind: Kafka under spec:
listeners:
  tls:
    authentication:
      type: tls
Full code #kafka-zookeeper-apps-tls-enabled.yml
Also the code below for kind: KafkaUser:
authentication:
  type: tls
authorization:
  type: simple
Full code#example-consumer-deny-deployment-authentication-TLS-alias-SSL.yml
In the example-consumer-deny-deployment-authentication-TLS-alias-SSL.yml code above, although the ACL type is deny, I am still able to consume messages.
The problem is that even with the above code, in the Kafka my-cluster-kafka-0 pod the environment variable KAFKA_AUTHORIZATION_TYPE=simple is absent, and authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer is also absent from server.properties.
Note: there are no warnings/errors in the log of the strimzi-cluster-operator pod while deploying the above code.
I am working with Strimzi for the first time, so please help me enable ACLs.
Your Kafka custom resource doesn't enable authorization because the authorization section is not in the right place. You need to add the authorization section like this:
listeners:
  tls:
    authentication:
      type: tls
  external:
    type: route
    authentication:
      type: tls
authorization:
  type: simple
  superUsers:
    - CN=my-user
You can read more about it in the documentation: https://strimzi.io/docs/latest/full.html#assembly-kafka-authentication-and-authorization-deployment-configuration-kafka
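Once the operator has rolled the brokers, you can sanity-check that the authorizer actually ended up in the generated broker configuration. This is a sketch: the pod and container names come from the question, while the /tmp/strimzi.properties path is an assumption about where the Strimzi Kafka container writes its generated config (use oc instead of kubectl on OpenShift).
# Look for the authorizer setting in the configuration Strimzi generates for the broker
kubectl exec -it my-cluster-kafka-0 -c kafka -- grep -i authorizer /tmp/strimzi.properties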