Want a practical example of how to use KafkaUser - apache-kafka

I am using Kafka with the Strimzi operator. I don't know how to use KafkaUser; can anyone suggest where I can learn its practical implementation? I have just created a KafkaUser and a KafkaTopic and now I am totally blank about what to do next.
This is my KafkaUser YAML:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Example consumer ACLs for topic my-topic using consumer group my-group
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
        host: "*"
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
        host: "*"
      - resource:
          type: group
          name: my-group
          patternType: literal
        operation: Read
        host: "*"
      # Example producer ACLs for topic my-topic
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Write
        host: "*"
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Create
        host: "*"
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
        host: "*"
And this is my KafkaTopic YAML:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824

If you enabled TLS authentication on the user, I would expect that authentication is enabled in your Kafka custom resource as well.
When the KafkaUser is created with this authentication type, a corresponding Secret is generated containing the user's private key and certificate for mutual TLS authentication with the broker.
You have to extract the key and certificate from the Secret and configure a keystore in your client application (how depends on the language you are using; if it's Java, you can extract the keystore directly from the Secret in PKCS12 format together with the corresponding password). For Java you can refer to the official Kafka documentation for setting up the keystore and truststore once they are extracted from the Secrets: https://kafka.apache.org/documentation/#security_configclients
With mutual TLS authentication enabled, you also have to connect to the brokers via TLS (you enabled it in the Kafka resource), so you have to extract the certificate from the cluster CA Secret and import it into your truststore.
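As a sketch, the extraction can be done with kubectl against a live cluster (the Secret and cluster names follow the example above; field names like user.p12, user.password and ca.crt are the Strimzi conventions for TLS users, and changeit is a placeholder truststore password):

```shell
# Extract the user's keystore (PKCS12) and its password from the KafkaUser Secret
kubectl get secret my-user -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12
kubectl get secret my-user -o jsonpath='{.data.user\.password}' | base64 -d > user.password

# Extract the cluster CA certificate and import it into a truststore
kubectl get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
keytool -importcert -alias cluster-ca -file ca.crt \
        -keystore truststore.p12 -storetype PKCS12 \
        -storepass changeit -noprompt
```

The resulting user.p12 and truststore.p12 files are then referenced from the client via ssl.keystore.location, ssl.keystore.password, ssl.truststore.location and ssl.truststore.password, as described in the Kafka documentation linked above.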
At that point the client will be able to connect and authenticate, and the ACLs you described will be applied.
More info is in the official documentation:
About user authentication
https://strimzi.io/docs/operators/master/using.html#con-securing-client-authentication-str
About clients running on Kubernetes connecting to the cluster
https://strimzi.io/docs/operators/master/using.html#configuring-internal-clients-to-trust-cluster-ca-str
About clients running outside Kubernetes connecting to the cluster
https://strimzi.io/docs/operators/master/using.html#configuring-external-clients-to-trust-cluster-ca-str

Related

Strimzi Kubernetes Kafka Cluster ID not matching

I am currently facing strange behaviour which I cannot really pin down.
I created a Kafka cluster and everything worked fine. Then I wanted to rebuild it in a different namespace, so I deleted everything (Kafka CRD, Strimzi YAML and devices) and created everything anew in the different namespace.
Now I am getting constant error messages (Kafka is not starting up):
The Cluster ID oAF2NGklQ0mF2_f3U0oJuw doesn't match stored clusterId Some(Gjb2frSRSS-v6Bpjx-dyqg) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
Thing is: I am not aware of having kept any state between both namespaces (dropped everything, formatted the filesystem). I even restarted the Kubernetes cluster node to make sure that even the emptyDir (strimzi-tmp) is reset, but no success.
The IDs stay the same (after reboot, rebuild of the filesystem, drop and recreation of the application).
My Cluster Config:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka-cluster
spec:
  kafka:
    version: 3.1.0
    replicas: 1
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.1"
    storage:
      type: persistent-claim
      size: 20Gi
      deleteClaim: false
    jmxOptions: {}
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 20Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
The problem was that the wrong persistent volume was mounted: the pod did not mount the requested iSCSI volume but some local storage.
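A quick way to confirm this kind of mismatch is to compare which PV actually backs the broker's claim and what cluster ID is persisted on the mounted volume (a sketch against a live cluster; the PVC name follows the Strimzi data-&lt;cluster&gt;-kafka-&lt;index&gt; convention for the single-broker example above, and the exact location of meta.properties under /var/lib/kafka can differ between Strimzi versions, hence the find):

```shell
# Which PersistentVolume backs the broker's claim?
kubectl get pvc data-kafka-cluster-kafka-0 -o jsonpath='{.spec.volumeName}'

# What cluster ID is stored on the volume the pod actually mounted?
kubectl exec kafka-cluster-kafka-0 -- \
  find /var/lib/kafka -name meta.properties -exec cat {} +
```

If the stored cluster.id differs from the one in the broker log message, the volume carries state from a previous cluster.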

How to pass MongoDB TLS certificates while creating a Debezium MongoDB Kafka connector?

We have a MongoDB cluster with three replicas. I have enabled preferred TLS with authentication type MongoDB-X509.
We have a three-broker Strimzi Kafka cluster and a Connect cluster with all required plugins (i.e. the MongoDB one provided by Debezium) up and running.
Below is the partial connect.yaml file used for the Kafka Connect deployment:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  config:
    config.providers: directory
    config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider
  externalConfiguration:
    volumes:
      - name: connector-config
        secret:
          secretName: mysecret
The deployment works fine and I am able to see the ca.pem and mongo-server.pem files in the /opt/kafka/external-configuration/connector-config directory.
After that I tried to create the MongoDB connector with the configuration file given below, but I am not sure about the exact way of passing the certificates, as there is no sample configuration file available for MongoDB connectors. Could you please help by providing a sample configuration?
I tried this configuration file:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mongodb.MongoDbConnector
  tasksMax: 2
  config:
    ssl.truststore.type: PEM
    ssl.truststore.location: "${directory:/opt/kafka/external-configuration/connector-config:ca.pem}"
    ssl.keystore.type: PEM
    ssl.keystore.location: "${directory:/opt/kafka/external-configuration/connector-config:mongo-server.pem}"
    mongodb.hosts: "rs0/192.168.99.100:27017"
    mongodb.name: "fullfillment"
    collection.include.list: "inventory[.]*"
    mongodb.ssl.enabled: true
    mongodb.ssl.invalid.hostname.allowed: true
but it was throwing a syntax error. Please help by providing a sample MongoDB connector.yaml.
As for Strimzi, you can use the external configuration to mount Secrets or ConfigMaps into the Strimzi Kafka Connect deployment: https://strimzi.io/docs/operators/latest/full/using.html#type-ExternalConfiguration-reference. Once it is loaded into the Pods, you can either refer to it using the file path, or use the Kafka configuration providers to load the data into the configuration options.
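For completeness, the Secret mounted via externalConfiguration in the question can be created from the two PEM files with plain kubectl (a sketch; the file names match those the connector config references, and the keys in the Secret default to the file names):

```shell
# Create the "mysecret" Secret with the two certificate files;
# they will appear as ca.pem and mongo-server.pem inside
# /opt/kafka/external-configuration/connector-config in the Connect pods
kubectl create secret generic mysecret \
  --from-file=ca.pem \
  --from-file=mongo-server.pem
```

The ${directory:&lt;dir&gt;:&lt;file&gt;} references in the connector config are then resolved by the DirectoryConfigProvider configured on the KafkaConnect resource.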

TopicAuthorizationException

I created a KafkaUser to access a Kafka topic on the cloud from outside; its definition is below. I can use SSL mode to access this topic externally on port 9094.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: data-user
  namespace: abc
  labels:
    strimzi.io/cluster: data-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - host: '*'
        operation: All
        resource:
          name: data-topic
          patternType: literal
          type: topic
        type: allow
      - host: '*'
        operation: All
        resource:
          name: data-group
          patternType: literal
          type: group
        type: allow
      - host: '*'
        operation: All
        resource:
          name: data-cluster
          patternType: literal
          type: cluster
        type: allow
Now, from inside the cloud, I want to use port 9092 to access this topic without any authentication or authorization. Is that possible?
When I run a consumer, it complains TOPIC_AUTHORIZATION_FAILED.
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --group data-group --topic data-topic
[2021-03-06 19:54:22,689] WARN [Consumer clientId=consumer-data-group-1, groupId=data-group] Error while fetching metadata with correlation id 2 : {data-topic=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)
[2021-03-06 19:54:22,692] ERROR [Consumer clientId=consumer-data-group-1, groupId=data-group] Topic authorization failed for topics [data-topic] (org.apache.kafka.clients.Metadata)
[2021-03-06 19:54:22,696] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [osprey2-topic]
Processed a total of 0 messages
My question is: I want to access the topic on port 9092 without any authorization. How can I do that?
Kafka supports only cluster-wide authorization. You can create multiple listeners with different authentication or none, but you can enable or disable authorization only once for the whole cluster, so it is not possible to have the internal listener without authorization. When a user connects over the listener without any authentication, it is connected as the ANONYMOUS user; this user is checked for ACLs like any other user, and if it does not have them it will not be allowed to do anything.
You can work around this by using the Kafka Admin API and giving the ANONYMOUS user the rights for all actions you want to perform from port 9092. However, this is definitely bad practice from a security perspective. You should instead use proper authentication on the 9092 interface as well, which will let you give users the ACLs they need. If for some reason you do not want to use SSL on the 9092 listener, you can still use, for example, SCRAM-SHA-512 authentication.
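For example, granting the ANONYMOUS principal consume rights could be sketched with the kafka-acls.sh tool, which uses the Admin API when pointed at --bootstrap-server (run it over a listener and principal that are allowed to alter ACLs; topic and group names follow the question):

```shell
# Allow the ANONYMOUS principal to read and describe data-topic
# and to use the consumer group data-group
bin/kafka-acls.sh --bootstrap-server localhost:9092 \
  --add --allow-principal User:ANONYMOUS \
  --operation Read --operation Describe \
  --topic data-topic --group data-group
```

Again, this only papers over the missing authentication; proper per-user ACLs on an authenticated listener are the safer design.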

Strimzi operator Kafka cluster ACL not enabling with type: simple

We know that to enable Kafka ACLs the property authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer has to be added to server.properties, but how do we enable it if the Kafka cluster is run by the Strimzi operator?
From the Strimzi documents I have learned that in order to enable authorization, the code below needs to be added for kind: Kafka under spec:
listeners:
  tls:
    authentication:
      type: tls
Full code #kafka-zookeeper-apps-tls-enabled.yml
Also the below code for kind: KafkaUser
authentication:
  type: tls
authorization:
  type: simple
Full code#example-consumer-deny-deployment-authentication-TLS-alias-SSL.yml
In the above example-consumer-deny-deployment-authentication-TLS-alias-SSL.yml, although the ACL type is deny, I am still able to consume messages.
The problem is that even with the above code, in the kafka my-cluster-kafka-0 pod the environment variable KAFKA_AUTHORIZATION_TYPE=simple is absent, and authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer is absent from server.properties as well.
Note: there are no warnings/errors in the log of the strimzi-cluster-operator pod while deploying the above code.
I am working with Strimzi for the first time, so please help me enable ACLs.
Your Kafka custom resource doesn't enable authorization because the authorization section is not in the right place. You need to add the authorization section like this:
listeners:
  tls:
    authentication:
      type: tls
  external:
    type: route
    authentication:
      type: tls
authorization:
  type: simple
  superUsers:
    - CN=my-user
You can read more about it in the documentation: https://strimzi.io/docs/latest/full.html#assembly-kafka-authentication-and-authorization-deployment-configuration-kafka

strimzi 0.14: Kafka broker failed authentication

I tried to build a Kafka cluster using Strimzi (0.14) in a Kubernetes cluster.
I used the examples that come with Strimzi, i.e. examples/kafka/kafka-persistent.yaml.
This YAML file looks like this:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.3.0
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.3"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 12Gi
          deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 9Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
kubectl apply -f examples/kafka/kafka-persistent.yaml
Both the ZooKeeper nodes and the Kafka brokers came up.
However, I saw errors in the Kafka broker logs:
[SocketServer brokerId=1] Failed authentication with /10.244.5.94 (SSL handshake failed) (org.apache.kafka.common.network.Selector) [data-plane-kafka-network-thread-1-ListenerName(REPLICATION)-SSL-0]
Does anyone know how to fix the problem?
One of the things which can cause this is your cluster using a different DNS suffix for service domains (the default is .cluster.local). You need to find out the right DNS suffix and set the environment variable KUBERNETES_SERVICE_DNS_DOMAIN in the Strimzi Cluster Operator deployment to override the default value.
If you exec into one of the Kafka or ZooKeeper pods and run hostname -f, it should show you the full hostname, from which you can identify the suffix.
(I pasted this from comments as a full answer since it helped to solve the question.)
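As a sketch, the override goes into the env section of the Cluster Operator Deployment (example.local here stands in for whatever suffix hostname -f revealed in your cluster):

```yaml
# Fragment of the strimzi-cluster-operator Deployment
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: KUBERNETES_SERVICE_DNS_DOMAIN
              value: example.local
```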