After log4j upgrade, Kafka unable to start - apache-kafka

I have recently updated the logging JAR of our application from log4j-1.2.17.jar to the latest log4j-1.2-api-2.18.0.jar. After configuring the following JARs, my Kafka server and ZooKeeper server are unable to start:
log4j-1.2-api-2.16.0.jar
log4j-api-2.16.0.jar
log4j-core-2.16.0.jar
log4j-slf4j-impl-2.16.0.jar
slf4j-api-1.7.30.jar
How do I resolve this issue after updating log4j?

You cannot just upgrade the JARs and hope things will work. Instead, upgrade all of Kafka and ZooKeeper, as I believe they both use reload4j now.
https://issues.apache.org/jira/browse/KAFKA-13660
https://issues.apache.org/jira/browse/ZOOKEEPER-4626
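Whichever route you take, it helps to check which logging JARs actually end up on the broker's classpath before starting it, since mixing the old log4j-1.2.17.jar with the log4j-1.2 bridge on the same classpath is a common cause of startup failures. A minimal sketch (the KAFKA_HOME variable and the /opt/kafka default are assumptions; adjust to your install):

```shell
# List the logging JARs Kafka will pick up from its libs directory.
# KAFKA_HOME and the default path are assumptions; adjust to your install.
KAFKA_HOME=${KAFKA_HOME:-/opt/kafka}
ls "$KAFKA_HOME"/libs 2>/dev/null | grep -iE 'log4j|reload4j|slf4j' \
  || echo "no logging jars found under $KAFKA_HOME/libs"
```

Any duplicate or leftover log4j 1.x JARs shown here should be removed before starting the broker.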

Related

Want to upgrade ZooKeeper from 3.4.14 to 3.5.6/latest

I want to upgrade ZooKeeper from 3.4.14 to a recent version (3.5.6). I have followed the upgrade guide and downloaded the ZooKeeper JAR,
but on restarting the server, it still fails while loading the data.
I tried the snapshot.trust.empty=true flag in the configuration, but then it is not able to load the previous data.
It worked after adding a snapshot.0 file to the version directory inside the ZooKeeper data directory.
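The workaround the asker describes can be sketched as follows. The data directory path is an assumption (use the dataDir from your zoo.cfg); version-2 is the standard name of the snapshot/log subdirectory:

```shell
# Sketch of the asker's workaround: put a snapshot.0 file in the version-2
# directory of the ZooKeeper data dir so that ZooKeeper 3.5+ finds a snapshot
# on startup. The data dir path is an assumption; use your dataDir from zoo.cfg.
ZK_DATA=${ZK_DATA:-/tmp/zookeeper-data}
mkdir -p "$ZK_DATA/version-2"
touch "$ZK_DATA/version-2/snapshot.0"
```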

Unable to upgrade from Kafka 0.8.2.2 to 0.10.0.0

I am trying to upgrade from Kafka 0.8.x to Kafka 0.10.x.
I did everything the Kafka documentation says
(http://kafka.apache.org/documentation.html#upgrade_10),
but I still don't see Kafka being upgraded. I did the following:
Added "inter.broker.protocol.version" and "log.message.format.version" to server.properties.
Stopped each broker, updated "inter.broker.protocol.version" to the current Kafka version, and started it again; repeated this on all the brokers in the cluster.
Updated the protocol version to "0.10.0.0" and restarted all the brokers.
I don't think the Kafka upgrade ever happened:
I still find the old JARs, and I don't find the new bash scripts that should be available after upgrading to 0.10.0.0.
Not sure if I did it right.
Any help would be appreciated.
It looks like you skipped Step 2:
Upgrade the brokers. This can be done a broker at a time by simply bringing it down, updating the code, and restarting it.
After adding inter.broker.protocol.version and log.message.format.version to the config, for each broker you need to stop it, put the new JARs (for 1.0) in place, and restart it.
You can get the latest 1.0 JAR from http://kafka.apache.org/downloads#1.0.2
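For reference, the two properties mentioned above go into each broker's server.properties during the rolling upgrade. A sketch of the intermediate state (the version values are examples for a cluster starting on 0.8.2; use the versions matching your cluster):

```properties
# server.properties fragment (example values; adjust to your current version)
inter.broker.protocol.version=0.8.2
log.message.format.version=0.8.2
```

Only once every broker is running the new code do you bump inter.broker.protocol.version to the new version and restart each broker one more time.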

Kafka manager configuration issue in kafka cluster

I was trying to install and configure Kafka Manager in my Kafka cluster, but I am facing an issue while building the Kafka Manager binary with:
./sbt clean dist
The server is not connected to the internet, so it cannot download the required binaries and hangs with the error:
getting scala version x.x.x
Kindly help me install and configure Kafka Manager offline.
Thanks
You can run sbt in offline mode by setting the parameter below:
$ sbt "set offline := true" run
Also make sure you have all the required dependencies and components in the local Ivy cache (.ivy2/cache) so the project can be built offline.
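Since the build machine has no internet access, the Ivy cache has to be populated on a connected machine first and copied over. A hedged sketch (host names and paths are assumptions; the guard makes it a no-op outside a kafka-manager checkout):

```shell
# On a machine WITH internet access, run the build once to fill ~/.ivy2/cache,
# then copy the cache to the offline server, e.g.:
#   ./sbt clean dist
#   rsync -a ~/.ivy2/ user@offline-server:~/.ivy2/
#
# On the offline server, build with sbt's offline flag so it resolves only
# from the local cache. Guarded so this is a no-op outside a checkout.
if [ -x ./sbt ]; then
  ./sbt "set offline := true" clean dist
fi
```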

kafka mongodb sink connector not starting

I've installed confluent_3.3.0 and started ZooKeeper, the Schema Registry, and the Kafka broker. I have also downloaded the MongoDB connector from this link.
Description: I'm running sink connector using the following command:
./bin/connect-standalone etc/kafka/connect-standalone.properties /home/username/mongo-connect-test/kafka-connect-mongodb/quickstart-couchbase-sink.properties
Problem: I'm getting the following error:
ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:91)
java.lang.IllegalAccessError: tried to access field org.apache.kafka.common.config.ConfigDef.NO_DEFAULT_VALUE from class org.radarcns.mongodb.MongoDbSinkConnector
Thanks for reading!
This connector is using, at its latest version, an old version of the kafka-clients API. Specifically, it depends on a constructor of the class org.apache.kafka.common.config.AbstractConfig that does not exist in Apache Kafka versions >= 0.11.0.0.
Confluent Platform version 3.3.0 uses Apache Kafka 0.11.0.0.
To fix this issue, the recommended approach would be to update the connector code to use the most recent versions of Apache Kafka APIs.
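Concretely, that means rebuilding the connector against the Kafka version the broker ships (0.11.0.0 for Confluent 3.3.0). A hedged sketch, assuming the connector builds with Maven (the project's actual build files may differ):

```xml
<!-- pom.xml fragment: build the connector against the broker's Kafka version.
     The version and the use of Maven are assumptions; adapt to the project. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>connect-api</artifactId>
  <version>0.11.0.0</version>
  <scope>provided</scope>
</dependency>
```

The provided scope keeps the Connect runtime's own classes off the plugin's classpath at run time, which is the usual convention for Kafka Connect plugins.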

Not able to run Hazelcast in AWS enabled mode

I am getting the following error when starting the Hazelcast server using server.sh, with all of versions 3.1.7, 3.2.6, and 3.3.3:
Error while creating AWSJoiner!
java.lang.ClassNotFoundException: com.hazelcast.cluster.TcpIpJoinerOverAWS
Multicast and tcp-ip are working fine.
hazelcast-all and all other JARs are included in the lib directory.
Did you include the 'hazelcast-cloud' JAR? It is needed to use AWS discovery.
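Besides putting the JAR on the classpath, AWS discovery also has to be enabled in the join section of hazelcast.xml. A sketch for the 3.x configuration schema (the credentials and region below are placeholders):

```xml
<!-- hazelcast.xml fragment (3.x schema): enable the AWS joiner and disable
     the others. Credentials and region are placeholders. -->
<join>
    <multicast enabled="false"/>
    <tcp-ip enabled="false"/>
    <aws enabled="true">
        <access-key>YOUR_ACCESS_KEY</access-key>
        <secret-key>YOUR_SECRET_KEY</secret-key>
        <region>us-east-1</region>
    </aws>
</join>
```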