no matches for kind "KafkaConnect" in version "kafka.strimzi.io/v1beta2" - kubernetes

I'm trying to work with Strimzi to create a Kafka Connect cluster and I'm encountering the following error:
unable to recognize "kafka-connect.yaml": no matches for kind "KafkaConnect" in
version "kafka.strimzi.io/v1beta2"
Here's the kafka-connect.yaml I have:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: kafka-connect
  namespace: connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  version: 2.4.0
  replicas: 1
  bootstrapServers: host:port
  tls:
    trustedCertificates:
      - secretName: connectorsecret
        certificate: cert
  config:
    group.id: o
    offset.storage.topic: strimzi-connect-cluster-offsets
    config.storage.topic: strimzi-connect-cluster-configs
    status.storage.topic: strimzi-connect-cluster-status
    sasl.mechanism: scram-sha-256
    security.protocol: SASL_SSL
    secretName: connectorsecret
    sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username=username password=password
Then I tried to apply the config via kubectl apply -f kafka-connect.yaml
Are there any prerequisites for creating resources using Strimzi, or is there something I'm doing wrong?

I think there are two possibilities:
You did not install the CRD resources
You are using a Strimzi version which is too old and does not support the v1beta2 API
Judging by the fact that you are trying to use Kafka 2.4.0, I guess the second option is more likely. If you really want to do that, you should make sure to use the documentation, examples and everything else from the version of Strimzi you use; they should be using one of the older APIs (v1alpha1 or v1beta1).
But in general, I would strongly recommend using the latest version of Strimzi and not a version which is several years old.
One more note: If you want to configure the SASL authentication for your Kafka Connect cluster, you should do it in the .spec.authentication section of the custom resource and not in .spec.config.
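For illustration, a minimal sketch of what that section could look like (the user and Secret names here are made up; whether scram-sha-256 is available as an authentication type depends on your Strimzi version, scram-sha-512 has been supported for longer):
spec:
  bootstrapServers: host:port
  authentication:
    type: scram-sha-512              # or scram-sha-256 on Strimzi versions that support it
    username: my-connect-user        # hypothetical username
    passwordSecret:
      secretName: my-connect-user    # hypothetical Secret containing the password
      password: password             # key within that Secret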

Related

How to pass MongoDB TLS certificates while creating a Debezium MongoDB Kafka connector?

We have a MongoDB cluster with three replicas. I have enabled preferred TLS with the authentication type set to MongoDB-X509.
We have a three-broker Strimzi Kafka cluster and a Connect cluster with all required plugins (i.e. the MongoDB plugin provided by Debezium) up and running.
Below is the partial connect.yaml file used for the Kafka Connect deployment:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  config:
    config.providers: directory
    config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider
  externalConfiguration:
    volumes:
      - name: connector-config
        secret:
          secretName: mysecret
The deployment works fine and I am able to see the ca.pem and mongo-server.pem files in the /opt/kafka/external-configuration/connector-config directory.
After that I am trying to create the MongoDB connector with the configuration file given below, but I am not sure about the exact way of passing the certificates, as there is no sample configuration file available for MongoDB connectors. Could you please help with this by providing some sample configuration?
I tried the configuration file below:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mongodb.MongoDbConnector
  tasksMax: 2
  config:
    ssl.truststore.type: PEM
    ssl.truststore.location: "${directory:/opt/kafka/external-configuration/connector-config:ca.pem}"
    ssl.keystore.type: PEM
    ssl.keystore.location: "${directory:/opt/kafka/external-configuration/connector-config:mongo-server.pem}"
    mongodb.hosts: "rs0/192.168.99.100:27017"
    mongodb.name: "fullfillment"
    collection.include.list: "inventory[.]*"
    mongodb.ssl.enabled: true
    mongodb.ssl.invalid.hostname.allowed: true
but it was throwing a syntax error. Please help with this by providing a sample MongoDB connector.yaml.
As for Strimzi, you can use the external configuration to mount Secrets or Config Maps into the Strimzi Kafka Connect deployment: https://strimzi.io/docs/operators/latest/full/using.html#type-ExternalConfiguration-reference. Once it is loaded into the Pods, you can either just refer to it using the file path, or you can use the Kafka configuration providers to load the data into the configuration options.
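Roughly, the two styles mentioned above look like this inside the connector config (a sketch only; the volume name connector-config matches the question, while the option names are placeholders, so check the Debezium MongoDB connector documentation for the real ones):
config:
  # 1) plain file path: usable for any option that expects a path inside the pod,
  #    mounted under /opt/kafka/external-configuration/<volume-name>/
  some.path.option: /opt/kafka/external-configuration/connector-config/ca.pem
  # 2) configuration provider: injects the file contents into the option value
  #    (requires config.providers and config.providers.directory.class in the
  #    KafkaConnect spec, as already shown in the question)
  some.value.option: ${directory:/opt/kafka/external-configuration/connector-config:ca.pem}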

Is it possible to access Zookeeper in Strimzi Kafka installed with Route listener type on OpenShift?

I have Strimzi Kafka cluster on OpenShift, configured like described here:
https://strimzi.io/blog/2019/04/30/accessing-kafka-part-3/
Basically like this:
kind: Kafka
metadata:
  name: ...
spec:
  kafka:
    version: 2.7.0
    replicas: 2
    listeners:
      plain: {}
      tls:
        authentication:
          type: tls
      external:
        type: route
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
According to the article above, I can only access the bootstrap server via port 443. Basically, this setup works and does what I need.
I am wondering if I can get external access to Zookeeper to manage the cluster via the command line from my machine? And if yes, should I download the Kafka binaries and use the CLI from the archive? Or do I need to log in to the Zookeeper pod (e.g. via the OpenShift UI) and manage the Kafka cluster via the CLI from there?
Thanks in advance.
Strimzi does not provide any access to Zookeeper. It is locked down using mTLS and network policies. If you really need it, you can use this unofficial project https://github.com/scholzj/zoo-entrance and create a route manually yourself. But it is not secure, so use it at your own risk. Opening a terminal inside the Zookeeper pod would be an option as well. But in most cases you should not need Zookeeper access today, as Kafka is anyway preparing for its removal.
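If you go the exec route, something along these lines should work (namespace and pod name are examples; the local client port and TLS requirements depend on the Strimzi version):
kubectl exec -n kafka -it my-cluster-zookeeper-0 -- bash
# inside the pod, the standard Zookeeper CLI scripts from the Kafka distribution
# are available, e.g. ./bin/zookeeper-shell.sh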

How to use Kafka connect in Strimzi

I am using Kafka with the Strimzi operator. I created a Kafka cluster and also deployed Kafka Connect using a YAML file, but after this I am totally blank about what to do next. I read that Kafka Connect is used to copy data from a source to a Kafka cluster or from a Kafka cluster to another destination.
I want to use Kafka Connect to copy data from a file to any topic of the Kafka cluster.
Can anyone please help me with how to do that? I am sharing the YAML file with which I created my Kafka Connect cluster.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  # annotations:
  #   # use-connector-resources configures this KafkaConnect
  #   # to use KafkaConnector resources to avoid
  #   # needing to call the Connect REST API directly
  #   strimzi.io/use-connector-resources: "true"
spec:
  version: 2.6.0
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
# kubectl create -f kafka-connect.yml -n strimzi
After that the pod for Kafka Connect is in Running status, but I don't know what to do next. Please help me.
Kafka Connect exposes a REST API, so you need to expose that HTTP endpoint from the Connect pods
I read that Kafka connect is used to copy data from a source to Kafka cluster or from Kafka cluster to another destination.
That is one application, but it sounds like you want MirrorMaker2 instead for that.
If you don't want to use the REST API, then uncomment this line
# strimzi.io/use-connector-resources: "true"
and use another YAML file to configure the Connect resources, as shown here for Debezium. See kind: "KafkaConnector".
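For the specific "copy data from a file into a topic" case, a minimal sketch using the FileStreamSource connector that ships with Kafka could look like the following (file and topic names are examples; the KafkaConnector apiVersion is v1alpha1 on older Strimzi releases and v1beta2 on current ones, and the use-connector-resources annotation above must be enabled):
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: my-file-source
  labels:
    strimzi.io/cluster: my-connect-cluster   # must match the KafkaConnect resource name
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 1
  config:
    file: /opt/kafka/LICENSE    # any file readable inside the Connect pod
    topic: my-topic             # target topic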
Look at this simple example from scratch. Not really what you want to do, but pretty close. We are sending messages to a topic using the kafka-console-producer.sh and consuming them using a file sink connector.
The example also shows how to include additional connectors by creating your own custom Connect image, based on the Strimzi one. This step would be needed for more complex examples involving external systems.
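That custom image typically boils down to a short Dockerfile based on the Strimzi Kafka image, plus pointing spec.image of the KafkaConnect resource at the pushed image. A sketch (the tag below matches Strimzi 0.20.0 / Kafka 2.6.0 and is only an example, pick the one matching your versions; my-plugins is a hypothetical local directory holding the connector JARs):
FROM quay.io/strimzi/kafka:0.20.0-kafka-2.6.0
USER root:root
COPY ./my-plugins/ /opt/kafka/plugins/
USER 1001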

strimzi operator 0.20 kafka 'useServiceDnsDomain' has no effect

PROBLEM:
For some reason the client pod can only resolve fully-qualified DNS names including the cluster service suffix.
That problem is described in this question:
AKS, WIndows Node, dns does not resolve service until fully qualified name is used
To work around this issue, I'm using the useServiceDnsDomain flag. The documentation (https://strimzi.io/docs/operators/master/using.html#type-GenericKafkaListenerConfiguration-schema-reference) explains it as:
Configures whether the Kubernetes service DNS domain should be used or not. If set to true, the generated addresses will contain the service DNS domain suffix (by default .cluster.local, can be configured using the environment variable KUBERNETES_SERVICE_DNS_DOMAIN). Defaults to false. This field can be used only with internal type listeners.
Part of my yaml is as follows
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: tt-kafka
  namespace: shared
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      - name: local
        port: 9092
        type: internal
        tls: false
        useServiceDnsDomain: true
This didn't do anything, so I also tried adding KUBERNETES_SERVICE_DNS_DOMAIN like below
template:
  kafkaContainer:
    env:
      - name: KUBERNETES_SERVICE_DNS_DOMAIN
        value: .cluster.local
strimzi/operator:0.20.0 image is being used.
In my client (.net Confluent.Kafka 1.4.4), I used tt-kafka-kafka-bootstrap.shared.svc.cluster.local as BootstrapServers.
It gives me an error:
Error: GroupCoordinator: Failed to resolve
'tt-kafka-kafka-2.tt-kafka-kafka-brokers.shared.svc:9092': No such
host is known.
I'm expecting the broker service to provide full names to the client but from the error it seems useServiceDnsDomain has no effect.
Any help is appreciated. Thanks.
As explained in https://github.com/strimzi/strimzi-kafka-operator/issues/3898, there is a typo in the docs. The right YAML is:
listeners:
  - name: plain
    port: 9092
    type: internal
    tls: false
    configuration:
      useServiceDnsDomain: true
If your domain is different from .cluster.local, you can use the KUBERNETES_SERVICE_DNS_DOMAIN env var to override it. But you have to configure it on the Strimzi Cluster Operator pod. Not on the Kafka pods: https://strimzi.io/docs/operators/latest/full/using.html#ref-operator-cluster-str
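In other words, the variable goes into the Deployment of the strimzi-cluster-operator, not into the Kafka custom resource. Roughly (a sketch only; the default value is cluster.local):
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: KUBERNETES_SERVICE_DNS_DOMAIN
              value: cluster.local   # your cluster's DNS suffix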

How to find the correct api version in Kubernetes?

I have a question about the usage of apiVersion in Kubernetes.
For example I am trying to deploy traefik 2.2.1 into my kubernetes cluster. I have a traefik middleware deployment definition like this:
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
spec:
  redirectScheme:
    scheme: https
    permanent: true
    port: 443
When I try to deploy my objects with
$ kubectl apply -f middleware.yaml
I got the following error message:
unable to recognize "middleware.yaml": no matches for kind "Middleware" in version "traefik.containo.us/v1alpha1"
The same object works fine with Traefik version 2.2.0 but not with version 2.2.1.
In the Traefik documentation there is no example other than the ones using the version "traefik.containo.us/v1alpha1".
I don't think that my deployment issue is specific to Traefik; it is a general problem of conflicting versions. Is there any way I can figure out which apiVersions are supported in my cluster environment?
There are so many outdated examples posted around using deprecated apiVersions that I wonder if there is some kind of official apiVersion directory for Kubernetes? Or maybe there is some kubectl command I can use to ask for apiVersions?
Most probably the CRDs for Traefik v2 are not installed. You can use the command below, which lists the API versions that are available on the Kubernetes cluster.
kubectl api-versions | grep traefik
traefik.containo.us/v1alpha1
Use the command below to check the CRDs installed on the Kubernetes cluster.
kubectl get crds
NAME                                   CREATED AT
ingressroutes.traefik.containo.us      2020-05-09T13:58:09Z
ingressroutetcps.traefik.containo.us   2020-05-09T13:58:09Z
ingressrouteudps.traefik.containo.us   2020-05-09T13:58:09Z
middlewares.traefik.containo.us        2020-05-09T13:58:09Z
tlsoptions.traefik.containo.us         2020-05-09T13:58:09Z
tlsstores.traefik.containo.us          2020-05-09T13:58:09Z
traefikservices.traefik.containo.us    2020-05-09T13:58:09Z
Check traefik v1 vs v2 here
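On top of kubectl api-versions and kubectl get crds, kubectl api-resources is also handy here because it maps each kind to its API group (the exact output columns vary between kubectl versions):
kubectl api-resources | grep -i middleware
# middlewares        traefik.containo.us   true   Middleware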
I found that if I just run kubectl apply again after a few moments, it then works.