TopicAuthorizationException - apache-kafka

I created a KafkaUser to access a Kafka topic in the cloud from outside; its definition is below. I can access this topic from outside over SSL on port 9094.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: data-user
  namespace: abc
  labels:
    strimzi.io/cluster: data-cluster
spec:
  authentication:
    type: tls
  authorization:
    acls:
      - host: '*'
        operation: All
        resource:
          name: data-topic
          patternType: literal
          type: topic
        type: allow
      - host: '*'
        operation: All
        resource:
          name: data-group
          patternType: literal
          type: group
        type: allow
      - host: '*'
        operation: All
        resource:
          name: data-cluster
          patternType: literal
          type: cluster
        type: allow
    type: simple
Now, from inside the cloud, I want to use port 9092 to access this topic without any authentication or authorization. Is that possible?
When I run the consumer, it fails with TOPIC_AUTHORIZATION_FAILED:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --group data-group --topic data-topic
[2021-03-06 19:54:22,689] WARN [Consumer clientId=consumer-data-group-1, groupId=data-group] Error while fetching metadata with correlation id 2 : {data-topic=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)
[2021-03-06 19:54:22,692] ERROR [Consumer clientId=consumer-data-group-1, groupId=data-group] Topic authorization failed for topics [data-topic] (org.apache.kafka.clients.Metadata)
[2021-03-06 19:54:22,696] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [osprey2-topic]
Processed a total of 0 messages
My question is: I want to access the topic on port 9092 without any authorization. How can I do that?

Kafka supports only cluster-wide authorization. You can create multiple listeners with different authentication mechanisms (or none at all), but authorization can be enabled or disabled only once, for the whole cluster. So it is not possible to have the internal listener without authorization. When a client connects over the listener without any authentication, it is connected as the ANONYMOUS user. This user is checked against the ACLs like any other user, and if it has none, it will not be allowed to do anything.
You can work around this problem by using the Kafka Admin API to give the ANONYMOUS user the rights for all the actions you want to perform from port 9092. However, this is definitely bad practice from a security perspective. You should instead use proper authentication on the 9092 listener as well, which will let you give users exactly the ACLs they need. If for some reason you do not want to use SSL on the 9092 listener, you can still use, for example, SCRAM-SHA-512 authentication.
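As a minimal sketch of that last suggestion, the internal listener could enable SCRAM-SHA-512 instead of TLS. This mirrors the keyed-listener style used elsewhere in this thread; the exact field layout depends on your Strimzi API version:

```yaml
kafka:
  listeners:
    plain:
      port: 9092
      tls: false
      type: internal
      authentication:
        type: scram-sha-512
```

A KafkaUser with authentication type scram-sha-512 would then get a generated password in its Secret, which internal clients can use without needing TLS certificates.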

Related

Is it possible to access Zookeeper in Strimzi Kafka installed with Route listener type on OpenShift?

I have a Strimzi Kafka cluster on OpenShift, configured as described here:
https://strimzi.io/blog/2019/04/30/accessing-kafka-part-3/
Basically like this:
kind: Kafka
metadata:
  name: ...
spec:
  kafka:
    version: 2.7.0
    replicas: 2
    listeners:
      plain: {}
      tls:
        authentication:
          type: tls
      external:
        type: route
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
According to the article above, I can only access the bootstrap server via port 443. Basically, this setup works and does what I need.
I am wondering if I can get external access to Zookeeper to manage the cluster via the command line from my machine. If yes, should I download the Kafka binaries and use the CLI from the archive? Or do I need to log in to the Zookeeper pod (e.g. via the OpenShift UI) and manage the Kafka cluster via the CLI from there?
Thanks in advance.
Strimzi does not provide any access to Zookeeper. It is locked down using mTLS and network policies. If you really need access, you can use the unofficial project https://github.com/scholzj/zoo-entrance and create a Route manually yourself. But it is not secure, so use it at your own risk. Opening a terminal inside the Zookeeper pod would be an option as well. But in most cases you should not need Zookeeper access today, as Kafka is preparing for its removal anyway.

Use plain mode to access kafka

I have a Kafka cluster on the OpenShift cloud. A customer outside of the cloud sends messages to the Kafka cluster, so we set up authentication and authorization for the Kafka cluster and topic, and created a user.
Now the external customer can publish messages to the Kafka topic via SSL or SASL on port 9094. An app inside OpenShift wants to consume messages on port 9092 in plain mode; that is, the app inside OpenShift doesn't want to use any authentication or authorization to access that topic. Do you think this is possible?
These are my Kafka cluster listeners:
kafka:
  authorization:
    type: simple
  listeners:
    external:
      authentication:
        type: tls
      port: 9094
      tls: true
      type: route
    plain:
      port: 9092
      tls: false
      type: internal

Want some practical example of how to use KafkaUser

I am using Kafka with the Strimzi operator. I don't know how to use KafkaUser; can anyone suggest where I can learn its practical implementation? I just created a KafkaUser and a KafkaTopic, and now I am totally blank about what to do next.
This is my KafkaUser YAML:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Example consumer ACLs for topic my-topic using consumer group my-group
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
        host: "*"
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
        host: "*"
      - resource:
          type: group
          name: my-group
          patternType: literal
        operation: Read
        host: "*"
      # Example producer ACLs for topic my-topic
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Write
        host: "*"
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Create
        host: "*"
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
        host: "*"
and this is my KafkaTopic YAML:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
If you enabled TLS authentication on the user, I would expect that authentication is enabled in your Kafka custom resource as well.
When a KafkaUser is created with this authentication type, a corresponding Secret is generated containing the user's private key and certificate for mutual TLS authentication with the broker.
You have to extract the key and certificate from the Secret and configure a keystore in your client application (how depends on the language you are using; if it is Java, you can extract the keystore directly from the Secret in P12 format, together with the corresponding password). For setting up the keystore and truststore once extracted from the Secrets, you can refer to the official Kafka documentation: https://kafka.apache.org/documentation/#security_configclients
With mutual TLS authentication enabled, you also have to connect to the brokers via TLS (you enabled it in the Kafka resource), so you have to extract the certificate from the cluster CA Secret and import it into your truststore.
At that point the client will be able to connect and authenticate, and the ACLs you described will be applied.
More info is in the official documentation:
About user authentication
https://strimzi.io/docs/operators/master/using.html#con-securing-client-authentication-str
About clients running on Kubernetes connecting to the cluster
https://strimzi.io/docs/operators/master/using.html#configuring-internal-clients-to-trust-cluster-ca-str
About clients running outside Kubernetes connecting to the cluster
https://strimzi.io/docs/operators/master/using.html#configuring-external-clients-to-trust-cluster-ca-str
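As a sketch of what the finished client configuration might look like once the keystore and truststore are in place (the file paths and password placeholders below are illustrative, not values taken from the Secrets):

```properties
security.protocol=SSL
# Truststore containing the cluster CA certificate
ssl.truststore.location=/path/to/truststore.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=<truststore-password>
# Keystore with the user key and certificate from the KafkaUser Secret
ssl.keystore.location=/path/to/user.p12
ssl.keystore.type=PKCS12
ssl.keystore.password=<user-password>
```

A file like this can be passed to the console clients with --consumer.config or --producer.config, or loaded into the client properties of a Java application.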

How to access Kafka installed outside a Kubernetes cluster from a service provisioned inside the cluster

My setup is as follows: I have a producer service provisioned as part of a minikube cluster, and it is trying to publish messages to a Kafka instance running on the host machine.
I have written Kafka Service and Endpoints YAML as follows:
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  ports:
    - name: "broker"
      protocol: "TCP"
      port: 9092
      targetPort: 9092
      nodePort: 0
---
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
  namespace: default
subsets:
  - addresses:
      - ip: 10.0.2.2
    ports:
      - name: "broker"
        port: 9092
The host machine's IP address as seen from inside the minikube cluster, used in the Endpoints above, was obtained with the following command:
minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'"
The problem I am facing is that the topic gets created when the producer tries to publish a message for the first time, but no messages are written to the topic.
Digging into the pod logs, I found that the producer is trying to connect to the Kafka instance on localhost or something similar (I am not really sure):
2020-05-17T19:09:43.021Z [warn] org.apache.kafka.clients.NetworkClient [] -
[Producer clientId=/system/sharding/kafkaProducer-greetings/singleton/singleton/producer]
Connection to node 0 (omkara/127.0.1.1:9092) could not be established. Broker may not be available.
Based on this, I suspected that I needed to modify server.properties with the following change:
listeners=PLAINTEXT://localhost:9092
This, however, only changed the IP address in the log:
2020-05-17T19:09:43.021Z [warn] org.apache.kafka.clients.NetworkClient [] -
[Producer clientId=/system/sharding/kafkaProducer-greetings/singleton/singleton/producer]
Connection to node 0 (omkara/127.0.0.1:9092) could not be established. Broker may not be available.
I am not sure what IP address should be used here, or whether there is an alternative solution. Is it even possible to connect from inside the Kubernetes cluster to a Kafka instance installed outside of it?
Since the producer's Kafka client is not on the same network as the broker, we need to configure an additional listener like so:
listeners=INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://localhost:9093,EXTERNAL://10.0.2.2:9092
inter.broker.listener.name=INTERNAL
We can verify the messages in the topic like so (note that --bootstrap-server takes a plain host:port, not a listener name):
kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic greetings --from-beginning
{"name":"Alice","message":"Namastey"}
You can find a detailed explanation on understanding and provisioning Kafka listeners here.

Strimzi operator Kafka cluster ACL not enabling with type: simple

We know that to enable Kafka ACLs, the property authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer has to be added to server.properties. But how do we enable that when the Kafka cluster is run by the Strimzi operator?
From the Strimzi documents I have learned that in order to enable authorization, the code below needs to be enabled for kind: Kafka under spec:
listeners:
  tls:
    authentication:
      type: tls
Full code #kafka-zookeeper-apps-tls-enabled.yml
And also the code below for kind: KafkaUser:
authentication:
  type: tls
authorization:
  type: simple
Full code#example-consumer-deny-deployment-authentication-TLS-alias-SSL.yml
In the above example-consumer-deny-deployment-authentication-TLS-alias-SSL.yml code, although the ACL type is deny, I am still able to consume messages.
The problem is that even with the above code, in the Kafka my-cluster-kafka-0 pod the environment variable KAFKA_AUTHORIZATION_TYPE=simple is absent, and authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer is also absent from server.properties.
Note: there are no warnings or errors in the log of the strimzi-cluster-operator pod while deploying the above code.
I am working with Strimzi for the first time, so please help me enable ACLs.
Your Kafka custom resource doesn't enable authorization because the authorization section is not in the right place. You need to add the authorization section at the same level as listeners, like this:
listeners:
  tls:
    authentication:
      type: tls
  external:
    type: route
    authentication:
      type: tls
authorization:
  type: simple
  superUsers:
    - CN=my-user
You can read more about it in the documentation: https://strimzi.io/docs/latest/full.html#assembly-kafka-authentication-and-authorization-deployment-configuration-kafka