Is the Kafka 0.10 consumer compatible with a 0.9 broker?
If I'm not mistaken, the 0.9 consumer is still considered beta, whereas 0.10 is stable, right? That's why I'm interested in using the 0.10 client, but my brokers are on 0.9 and I would rather not upgrade them yet.
If you want to use 0.10 clients you need to upgrade your cluster to 0.10.
Kafka is backward compatible with regards to clients but not forward compatible. That is, a 0.9 client can use a 0.10 cluster but a 0.10 client can not use a 0.9 cluster.
The idea is to upgrade your cluster to 0.10 first, and then gradually migrate clients from 0.9 to 0.10.
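For concreteness, here is a sketch of the server.properties settings involved in such a rolling upgrade, following the procedure in the official upgrade guide (version strings are illustrative):

```properties
# Step 1: before swapping the 0.10 binaries onto each broker, pin the
# wire protocol and message format to the old version:
inter.broker.protocol.version=0.9.0
log.message.format.version=0.9.0

# Step 2: once ALL brokers are running the 0.10 code, bump the protocol
# and perform one more rolling restart:
# inter.broker.protocol.version=0.10.0

# Step 3: after all consumers are upgraded too, bump the message format:
# log.message.format.version=0.10.0
```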
My answer is not only about the 0.10.0 client. I searched for Kafka client and broker version compatibility and landed here, so I'm leaving a more complete answer for future readers.
According to this official post:
The “bidirectional” client compatibility work done in KIP-35 and KIP-97 removed these limitations. New Java clients can now communicate with old brokers.
Improved client compatibility is a new feature in Kafka 0.10.2. It is supported by brokers that are at version 0.10.0 or later.
For example, with a 2.0.0 client we can talk to brokers at 0.10.0 and any later version (although, of course, newer features will not be supported by an old broker). But with a 0.10.1 client we can only communicate with brokers at 0.10.1 or later.
So, a 0.10 Kafka consumer is not compatible with a 0.9 broker.
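If you want to check what a given broker actually supports, Kafka ships a small CLI for this, built on the ApiVersions request from KIP-35 (broker address illustrative; the tool requires a 0.10.2+ client and a 0.10.0+ broker):

```bash
# Prints, per broker, the range of API versions it supports; new clients
# use the same mechanism to negotiate a compatible protocol version.
bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092
```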
You didn't say what language your client is written in or which client libraries you are using. Some clients (like those based on librdkafka) can handle connections to an older broker, but the general rule (which also holds for the default Apache Kafka Java clients) is that the broker must be at an equal or higher version than the clients. In other words, Kafka is backward compatible, but it is not yet fully forward compatible.
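To illustrate the librdkafka case: librdkafka-based clients can be pointed at brokers older than 0.10 by disabling the version probe. A minimal sketch using the confluent-kafka Python client, with illustrative address and group id:

```python
from confluent_kafka import Consumer

# Pre-0.10 brokers cannot answer the ApiVersions request, so librdkafka
# must be told not to probe and which broker version to assume instead.
consumer = Consumer({
    "bootstrap.servers": "broker:9092",    # illustrative address
    "group.id": "example-group",           # illustrative group id
    "api.version.request": False,          # skip the ApiVersions probe
    "broker.version.fallback": "0.9.0",    # assume a 0.9-level broker
})
```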
Related
We have a production Kafka cluster on version 2.7, with 5 brokers running RHEL 7.9.
We want to upgrade Kafka to a 3.x version.
As far as we understand, the 3.x line is moving away from ZooKeeper, so we are wondering if we can do the upgrade without any data loss.
In Kafka 2.7, Kafka stores its metadata (broker IDs, topic names, etc.) on the ZooKeeper servers.
So, is it possible to perform a rolling upgrade from 2.7 to 3.x without any data loss?
The upgrade guide should contain all the info you need.
While KRaft mode (without ZooKeeper) has been production-ready since 3.3, ZooKeeper support is being kept around for compatibility until the 4.0 release.
Furthermore, if I understand correctly, it is currently only possible to set up a fresh cluster in KRaft mode, not to migrate an existing ZooKeeper-based one. Kafka 3.5 is intended as a bridge version for migrating from ZooKeeper to KRaft.
This is explained quite nicely in the release notes from Kafka, especially for Kafka 3.3, and in the release video.
As long as your Kafka brokers are not still running on Java 8, you can simply do a rolling upgrade from 2.7 to 3.x as you are used to.
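A sketch of the corresponding server.properties steps for the 2.7 to 3.x case, per the upgrade guide (the 3.x target version is illustrative):

```properties
# Step 1: pin the inter-broker protocol to the current version, then roll
# the brokers one at a time onto the 3.x binaries:
inter.broker.protocol.version=2.7

# Step 2: once all brokers run the 3.x code and the cluster is healthy,
# bump the protocol and do one more rolling restart:
# inter.broker.protocol.version=3.4
```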
We are planning to upgrade our Kafka client from 2.0.0 to a later version that is compatible with Kafka broker 2.1.x.
What is the latest Kafka client version that is compatible with Kafka broker 2.1.x?
Is it possible to upgrade from 2.0.0 to 2.3.x?
Is there a compatibility matrix for the latest Kafka broker and client versions? Has anyone faced any issues with such upgrades?
I have 5 Kafka clusters, and one of them runs MirrorMaker to consume from the other 4 clusters and produce to the main Kafka cluster.
I've tried the latest version of Kafka separately, and my apps seem to work fine with it. The upgrade process, as described in the Kafka upgrade guide (https://kafka.apache.org/documentation/#upgrade), is a rolling update: first moving to 2.5.0 while keeping the inter-broker protocol at 0.10.0, and then doing a rolling restart with the newer protocol.
The problem is MirrorMaker and avoiding downtime during the upgrade so no messages are lost. I've read that consumers should be upgraded first, but MirrorMaker is a producer and a consumer at the same time, so I tried upgrading my main Kafka cluster first; it is now running 2.5.0 with the 2.5.0 protocol, while the other (test) cluster is still on 0.10.0. When I try to start the mirror, I get some warnings/errors because I am still using old, deprecated configs.
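For context, this is roughly the shape of the MirrorMaker 1 invocation in play here (file names and whitelist are illustrative):

```bash
# MirrorMaker acts as a consumer of the source cluster and a producer to
# the target cluster at the same time, driven by two config files.
bin/kafka-mirror-maker.sh \
  --consumer.config source-consumer.properties \
  --producer.config target-producer.properties \
  --whitelist ".*"
```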
Which path should I follow to do the upgrade?
I have a Kafka producer using Kafka 0.8 on one machine (e.g., IP 1.2.3.4). Can I use a Kafka consumer using Kafka 0.10 on another machine to consume the messages?
I tried to write a consumer on the newer version that connects to 1.2.3.4:9092, but it fails with kafka.errors.NoBrokersAvailable: NoBrokersAvailable. Is that not allowed, or did I set something wrong?
Thanks.
Version 0.10.2 clients can talk to version 0.10.0 or newer brokers. However, if your brokers are older than 0.10.0, you must upgrade all the brokers in the Kafka cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer clients.
https://kafka.apache.org/documentation/#upgrade_10_2_0
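As an aside, the kafka.errors.NoBrokersAvailable in the question comes from the kafka-python library, which by default probes the broker's version on connect. You can pin the version to skip the probe, but per the rules above a 0.10-level client still cannot rely on a 0.8 broker, so upgrading the brokers remains the supported fix. A minimal sketch, with illustrative topic name:

```python
from kafka import KafkaConsumer

# Pinning api_version skips kafka-python's broker-version auto-detection,
# which is what typically fails with NoBrokersAvailable on very old brokers.
consumer = KafkaConsumer(
    "my-topic",                        # illustrative topic
    bootstrap_servers="1.2.3.4:9092",
    api_version=(0, 8, 2),             # assume a 0.8.2 broker
)
```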
I need to use Confluent Schema Registry to connect to a 0.9 Kafka cluster, but in Schema Registry 2.0.1 I didn't see any SSL configuration available. Is there a way to enable SSL for Schema Registry and Kafka REST to talk to 0.9 Kafka?
That functionality was only added as of the 3.0 release line. If you want support for 0.9/2.0 releases, you could try either a) taking a 3.0 release and reducing the Kafka dependency to 0.9 versions (this might work, but I think there were other unrelated changes that might require additional changes), or b) backporting the SSL changes to a 2.0 version (although I can't be sure off the top of my head whether that version supports everything required to enable security support).
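For reference, these are the kinds of kafkastore SSL settings that exist in the 3.0+ Schema Registry line (setting names per the Confluent docs; paths and passwords are illustrative):

```properties
# SSL for the connection from Schema Registry to the Kafka cluster:
kafkastore.security.protocol=SSL
kafkastore.ssl.truststore.location=/etc/schema-registry/truststore.jks
kafkastore.ssl.truststore.password=changeit
kafkastore.ssl.keystore.location=/etc/schema-registry/keystore.jks
kafkastore.ssl.keystore.password=changeit
```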
Note that if you follow the upgrade docs, upgrading brokers to enable use of newer client versions is generally relatively painless these days.