Can previous versions of ActiveMQ Artemis interact with the current version? - activemq-artemis

For example, could a 2.1X (X < 9) client interact with a 2.19 server?

In general, older clients should work with newer brokers, although the broker may have new features that the older clients can't use properly.
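For illustration, here is a minimal smoke test in the mixed setup the question describes: an older Artemis JMS client jar (say artemis-jms-client 2.18.x, an assumption) sending to a 2.19 broker. The broker URL and queue name are illustrative, not from the question:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class OldClientSmokeTest {
    public static void main(String[] args) throws Exception {
        // Client artifact pinned to an older release (e.g. artemis-jms-client
        // 2.18.x, an assumption) while the broker runs 2.19.x.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("compat.test"); // hypothetical queue name
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("hello from an older client");
            producer.send(message);
        }
    }
}
```

If basic send/receive like this works, the remaining risk is the one noted above: broker-side features the older client simply has no API for.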

Related

Why does the Strimzi Kafka operator have supported Kafka versions?

Why does the Strimzi Kafka operator have supported Kafka versions? Why do I care about this if the version of Kafka is being managed by the operator?
Is this only mentioned for client support?
The Apache Kafka versions supported by the different Strimzi versions are listed on the Strimzi website. Supported in this case means the versions for which we ship container images and which were tested. There are several reasons why we don't support more versions:
While you might not care about this if the version of Kafka is being managed by the operator, the operator still cares: it encodes operational knowledge, so it needs to understand the software it operates.
Like any other software, Apache Kafka evolves: APIs (for example, around the Admin APIs) and configurations change (e.g. new options are added in different versions, and the operator needs to understand them to validate or update them). So supporting old versions is not easy without added code complexity.
We have limited resources to build and test the software, both in terms of contributors and in terms of CI resources to run the build and test pipelines.
The current Strimzi commitment to which Kafka versions it supports is listed here. If you are interested, you can always join the project and help make things better. Since Strimzi is open source, you can also always try to add another Kafka version yourself and build and test it.
The Kafka consumers and producers normally have very good backward/forward compatibility, so you do not necessarily need to use the same version of the clients as the brokers.
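As a concrete illustration of that point, a bare-bones producer, assuming a broker at localhost:9092 and a topic named compat-test (both assumptions); the kafka-clients version on the classpath does not need to match the broker version for this basic path:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompatProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed bootstrap address; the kafka-clients jar version can be
        // newer or older than the broker for plain produce requests.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("compat-test", "key", "value"));
        } // close() flushes any buffered records
    }
}
```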

Kafka Streams client library compatibility with Kafka broker version

I am using the Kafka client & Streams library version 2.7.0 for building my application. However, the Kafka brokers (2 different clusters) are at older versions (2.4.1 & 2.6.0).
As I understand it, we can use the latest client & Streams library and it should run fine with older versions of the Kafka brokers. Am I correct? Is there any compatibility matrix between the client & Streams library and Kafka brokers?
I tried running my application (with the 2.7.0 client library) in a local environment (with Kafka version 2.6.0) and it worked fine, but I wanted to confirm the supported compatibility between them.
Update: As onecricketeer has helpfully pointed out, you can refer to the Kafka Compatibility Matrix. He also notes:
There is a general answer. Clients above 0.10.2 work with brokers down to that version for all basic functionality until stated otherwise. Extra functionality includes transactional/idempotence and record headers, which Spring may depend on, but Kafka Streams natively has no dependency on.
Additionally, the upgrade section of the Kafka Documentation provides guidance on upgrade order for various Kafka versions.
The compatibility matrix provided by the spring-cloud-stream project may also be of assistance.
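To make the Streams case concrete, a minimal sketch (the application id, bootstrap address, and topic names are assumptions): a 2.7.0 Streams library builds the topology entirely client-side and only issues ordinary produce/consume/group requests to the brokers, which is why it runs against 2.4.1 and 2.6.0 clusters:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class CompatStreamsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "compat-streams-app"); // assumed id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Pass-through topology: read one topic, write another.
        builder.stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The caveat from the quoted comment still applies: features such as exactly-once processing lean on broker-side support (transactions), so the brokers must be new enough for those specific features, not for Streams as such.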

Kafka broker and client versions compatibility

We are planning to upgrade our Kafka brokers to kafka_2.12-2.3.0 (Kafka 2.3.0 built for Scala 2.12), but our Kafka clients are still going to use 0.10.x or higher versions.
Locally, we have verified this and seen no issues producing and consuming with the older client versions mentioned above while the broker is upgraded to kafka_2.12-2.3.0.
Is there a compatibility matrix for the Kafka broker and client versions mentioned? Did anyone face any issues with such upgrades?
P.S. I went through the link below:
https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix
As the link says, brokers 1.0.0 (and up) will work for basic client interaction with any client that supports the features added in KIP-35.
The main features missing from clients before 0.11 or 1.0 are message headers and the idempotent producer, or exactly-once processing semantics for Kafka Streams.
You'll also want to be careful not to upgrade the log format version too soon, because, as the Kafka upgrade steps say, you should only change it once most of the clients have upgraded.
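To show what those two missing features look like in practice, a hedged producer sketch (the bootstrap address and topic name are assumptions): both the idempotence flag and record headers require a 0.11+ client and broker, so a 0.10.x client has no equivalent of either:

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class Post011FeaturesProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotent producer: introduced in 0.11, absent from 0.10.x clients.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("upgrade-test", "key", "value"); // assumed topic
            // Record headers: also part of the 0.11+ message format and client API.
            record.headers().add("trace-id", "abc123".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}
```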

Update Kafka 1 to Kafka 2

We are running an Apache Kafka 1.1.0 cluster with 5 brokers.
Since the machines are managed via Ansible, the easiest way for us to update would be to rebuild the brokers one by one with the new version.
The main question is, can some brokers with 1.1.0 and some brokers with 2.3.0 coexist in the same cluster at the same time?
Although it is not best practice, you can have brokers with different versions in the same cluster. You'd have to configure inter.broker.protocol.version accordingly:
Specify which version of the inter-broker protocol will be used. This is typically bumped after all brokers were upgraded to a new version. Example of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1. Check ApiVersion for the full list.
However, if the older and newer versions have a large gap between them, you might end up with compatibility (or other) issues.
The "Upgrading from previous versions" section in the Kafka docs should shed some more light.

Kafka broker 1.1.0, clients using API 0.10.2

Should we update our Scala Kafka client library dependency (currently 0.10.2) to match the Kafka version of the broker (v1.1.0)?
The Kafka 0.10.2 documentation mentions:
Starting with version 0.10.2, Java clients (producer and consumer) have acquired the ability to communicate with older brokers. Version 0.10.2 clients can talk to version 0.10.0 or newer brokers.
Are there any adverse effects when the client API version lags behind the server version? More importantly, can we safely update our Kafka client API library from 0.10.2 to 1.1.0?
While the brokers are now compatible with older clients, there are a few drawbacks to using older clients.
The main one is message conversion. Between 0.10.2 and 1.1, the record format changed, so by default older clients force brokers to convert messages when producing and consuming. Conversion is fairly memory-intensive and has a performance cost. See http://kafka.apache.org/documentation/#upgrade_11_message_format
Then, obviously, old clients are unable to use new features. Between 0.10.2 and 1.1, there are a ton of nice features like exactly-once semantics, better authentication feedback on failure, admin operations, etc.
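As an example of one of those features, a sketch of a transactional (exactly-once) producer, which a 0.10.2 client simply cannot express; the bootstrap address, transactional id, and topic name here are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Transactions appeared in 0.11; this config and the calls below
        // do not exist in the 0.10.2 client API.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "compat-demo-tx"); // assumed id

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("tx-topic", "key", "value")); // assumed topic
            producer.commitTransaction();
        }
    }
}
```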