JHipster 5.7 generates a kafka.yml file referencing Kafka 1.0.0.
Is Kafka 2.x well supported (and already tested) by JHipster? If so, is a move to Kafka 2.x planned?
Thanks
We have a production Kafka cluster running version 2.7, with 5 brokers on RHEL 7.9.
We want to upgrade to Kafka 3.x.
Kafka 3.x can run without ZooKeeper, so we are wondering whether we can upgrade without any data loss.
In Kafka 2.7, the metadata (broker IDs, topic names, etc.) is stored on the ZooKeeper servers.
Is it possible to perform a rolling upgrade from 2.7 to 3.x without any data loss?
The upgrade guide should contain all the information you need.
While KRaft mode (without ZooKeeper) has been production-ready since 3.3, ZooKeeper is still kept around for compatibility until the 4.0 release.
Furthermore, if I understand correctly, it is currently only possible to set up a fresh cluster in KRaft mode, not to migrate an existing ZooKeeper-based one. Kafka 3.5 is intended as the bridge release for migrating from ZooKeeper to KRaft.
This is explained quite nicely in the Kafka release notes, especially those for 3.3, and in the release video.
As long as your Kafka brokers are not still running on Java 8, you can simply do a rolling upgrade from 2.7 to 3.x as you are used to.
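For illustration (this is not from the upgrade guide): during a rolling upgrade you can verify that each broker has re-registered before restarting the next one. A minimal sketch using the Java AdminClient, assuming a hypothetical bootstrap address of localhost:9092 and the 5-broker cluster from the question:

```java
import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical bootstrap address; point this at your own cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // List the brokers currently registered with the cluster.
            Collection<Node> nodes = admin.describeCluster().nodes().get();
            System.out.println("Registered brokers: " + nodes.size());
            for (Node n : nodes) {
                System.out.printf("  id=%d host=%s:%d%n", n.id(), n.host(), n.port());
            }
            // Move on to the next broker only once all 5 have re-registered.
        }
    }
}
```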
What benefits do you think upgrading the Kafka version will provide, and is it important? What are the advantages of switching from 2.8 to 3.1?
The Kafka release notes cover all added features and fixes. For example, as of the Kafka 3.3.1 release, ZooKeeper is no longer required.
Kafka 3.0 deprecated support for Java 8 and Scala 2.12, and enables the idempotent producer by default (enable.idempotence=true with acks=all), I believe.
Make sure you read the upgrade notes for protocol changes that make it impossible to downgrade if something goes wrong.
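For illustration (not from the answer): with the 3.0 producer defaults, the idempotence settings below no longer need to be set explicitly; this sketch pins them anyway so the behavior does not depend on the client version. The bootstrap address and topic name are hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Defaults as of Kafka 3.0 (previously had to be enabled explicitly):
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value")); // hypothetical topic
        }
    }
}
```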
Spark-Cassandra experts: will Apache Spark 1.4 work with Apache Cassandra 3.0 in DataStax installations? We are considering several options for migrating from DSE 4.8 (Spark 1.4 and Cassandra 2.1) to DSE 5.0 (Spark 1.6 and Cassandra 3.0). One option is to upgrade the Cassandra cluster to DSE 5.0 and leave the Spark cluster on DSE 4.8. This means we would have to make Apache Spark 1.4 work with Apache Cassandra 3.0. We use https://github.com/datastax/spark-cassandra-connector versions 1.4 (DSE 4.8) and 1.6 (DSE 5.0). Has anyone tried using Spark 1.4 (DSE 4.8) with Cassandra 3.0 (DSE 5.0)?
As far as I can see on Maven Central, Spark Cassandra Connector 1.4.5 used version 2.1.7 of the Java driver. According to the compatibility matrix in the official documentation, the 2.1.x driver won't work with Cassandra 3.0. You can of course test it, but I doubt it will work: the driver is usually backward compatible, but not forward compatible.
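For reference (an illustration, not from the answer): the application code is essentially the same across connector versions; the compatibility problem lives in the bundled Java driver. A minimal Java sketch against connector 1.6.x, assuming a local Spark master and a hypothetical keyspace/table ks.kv:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import com.datastax.spark.connector.japi.CassandraRow;
import com.datastax.spark.connector.japi.rdd.CassandraJavaRDD;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

public class ConnectorSmokeTest {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("connector-smoke-test")
                .setMaster("local[*]")                                 // hypothetical: run locally
                .set("spark.cassandra.connection.host", "127.0.0.1"); // hypothetical host

        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Read a hypothetical table; this works only if the connector's bundled
            // Java driver is compatible with the Cassandra version it talks to.
            CassandraJavaRDD<CassandraRow> rows = javaFunctions(sc).cassandraTable("ks", "kv");
            System.out.println("Row count: " + rows.count());
        }
    }
}
```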
I recommend migrating to DSE 5.0 and then moving on to 5.1 fairly quickly, as 5.0 could reach EOL soon.
P.S. If you have more questions, I recommend joining the DataStax Academy Slack; there is a dedicated channel for the Spark Cassandra Connector.
I need to use the Confluent Schema Registry with Kafka 0.9, but in Schema Registry 2.0.1 I don't see any SSL configuration available. Is there a way to enable SSL for schema-registry and kafka-rest to talk to Kafka 0.9?
That functionality was only added in the 3.0 release line. If you want it with the 0.9/2.0 releases, you could try either a) taking a 3.0 release and downgrading its Kafka dependency to 0.9 (this might work, but I think there were other unrelated changes that would require additional adjustments), or b) backporting the SSL changes to a 2.0 version (although I can't say off the top of my head whether that version supports everything required to enable security).
Note that if you follow the upgrade docs, upgrading brokers to enable use of newer client versions is generally relatively painless these days.
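For context (an illustration, not from the answer): the Kafka clients themselves have supported SSL since 0.9; what Schema Registry 2.0.1 lacked was a way to pass such settings through to its embedded client. A sketch of the client-side SSL settings, with hypothetical paths and passwords:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SslClientConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093"); // hypothetical SSL listener
        props.put("group.id", "ssl-demo");              // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Client-side SSL settings, supported by Kafka clients since 0.9:
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/path/to/truststore.jks"); // hypothetical path
        props.put("ssl.truststore.password", "changeit");                // hypothetical password

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // A registry's embedded Kafka client would need equivalent settings.
        }
    }
}
```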
I am using KafkaConsumer82, with connector jar version 0.10.2, Kafka version 0.9.1, and Flink version 1.0.0.
The Java consumer works fine when I run it from within an IDE as a standalone main program. But when I run it via flink run, I don't see the messages being consumed, and there are no logs in the JobManager's stdout at localhost:8081. Please let me know what the issue might be.
As a first step, I would suggest getting the versions in sync. If you're using Kafka 0.9 and Flink 1.0.0, use flink-connector-kafka-0.9 version 1.0.0, which contains FlinkKafkaConsumer09.
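For illustration (not from the answer): a minimal Flink 1.0.0 job using FlinkKafkaConsumer09, with a hypothetical topic and broker address. Note that print() output goes to the TaskManager's stdout, not the JobManager's, which is one reason output may not appear where you expect:

```java
import java.util.Properties;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaReadJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.setProperty("group.id", "flink-demo");              // hypothetical group id

        // FlinkKafkaConsumer09 matches Kafka 0.9 brokers; the topic name is hypothetical.
        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer09<>("demo-topic", new SimpleStringSchema(), props));

        // print() writes to the TaskManager stdout, not the JobManager's.
        stream.print();

        env.execute("kafka-read-job");
    }
}
```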