I am getting the following error when starting the Hazelcast server using server.sh, on all of versions 3.1.7, 3.2.6, and 3.3.3:
Error while creating AWSJoiner!
java.lang.ClassNotFoundException: com.hazelcast.cluster.TcpIpJoinerOverAWS
Multicast and tcp-ip discovery are working fine.
The hazelcast-all jar and all the other jars are included in the lib directory.
Did you include the 'hazelcast-cloud' jar? It is needed to use AWS discovery.
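With that jar on the classpath, AWS discovery is enabled in hazelcast.xml. Here is a minimal sketch for Hazelcast 3.x, where the keys, region, and security group are placeholders:
<network>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="false"/>
        <aws enabled="true">
            <!-- placeholder credentials - supply your own -->
            <access-key>my-access-key</access-key>
            <secret-key>my-secret-key</secret-key>
            <region>us-west-1</region>
            <security-group-name>hazelcast-sg</security-group-name>
        </aws>
    </join>
</network>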
I have recently updated the logging jar of our application from log4j-1.2.17.jar to the latest log4j-1.2-api-2.18.0.jar. After configuring the latest version, my Kafka server and ZooKeeper server are unable to start:
log4j-1.2-api-2.16.0.jar
log4j-api-2.16.0.jar
log4j-core-2.16.0.jar
log4j-slf4j-impl-2.16.0.jar
slf4j-api-1.7.30.jar
How do I resolve this issue after updating log4j?
You cannot just upgrade JARs and hope things will work. Instead, upgrade all of Kafka and ZooKeeper, as I believe they both use reload4j now.
https://issues.apache.org/jira/browse/KAFKA-13660
https://issues.apache.org/jira/browse/ZOOKEEPER-4626
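If you only control the client application rather than the brokers themselves, a rough sketch of the same move in a Maven build is to drop log4j 1.2.x and pull in reload4j, its drop-in fork (the versions here are illustrative):
<!-- drop-in replacement for log4j-1.2.17.jar -->
<dependency>
    <groupId>ch.qos.reload4j</groupId>
    <artifactId>reload4j</artifactId>
    <version>1.2.22</version>
</dependency>
<!-- SLF4J binding for reload4j, replacing slf4j-log4j12 -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-reload4j</artifactId>
    <version>1.7.36</version>
</dependency>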
I'm trying to install the MQ source & sink connectors for our Confluent Cloud cluster. I've done this for on-prem Apache Kafka, but doing the same for the cloud seems to be different. The Confluent documentation says I need to have the platform installed locally, which I did, then run a confluent-hub install, which installs the connector on my local machine, and then use the JSON for the distributed instance.
My problem is that when I run the JSON, it says the class for MQ was not found. I tried pointing the CLASSPATH at the directory where the jars are, but I still get the same error. How do I run this successfully?
--
ERROR Uncaught exception in REST call to /connectors (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)
org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.confluent.connect.ibm.mq.IbmMQSourceConnector,
I also want to understand how installing the connector locally would apply to my cloud cluster. Not sure what I'm missing!
Confluent Cloud doesn't support custom connector installation, last I checked. They need to explicitly support and offer it.
I assume you're reading some documentation that indicates you need to run your own Connect cluster (not necessarily locally), where you have full control over the connectors that are installed.
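To sketch what that looks like: a self-managed Connect worker is an ordinary Kafka client, so its worker properties just point at the Confluent Cloud bootstrap endpoint with an API key. Everything below is a placeholder except the property names:
# connect-distributed.properties (sketch)
bootstrap.servers=pkc-xxxxx.us-east-1.aws.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="API_KEY" password="API_SECRET";
# producer.* and consumer.* need the same security settings
group.id=my-connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# directory holding the MQ connector jars
plugin.path=/opt/connectors
# config.storage.topic, offset.storage.topic, and status.storage.topic are also required
The connector JSON is then POSTed to this worker's REST endpoint, not to Confluent Cloud.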
I'm trying to use Hazelcast with Wildfly.
Following the instructions provided on the Hazelcast website, I was able to start a cluster using hazelcast-jca and hazelcast-jca-rar.
What I don't know is where I configure the distributed maps.
You need to configure the maps in hazelcast.xml and put the configuration into your classpath.
http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#map
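For example, a minimal map entry in a Hazelcast 3.x hazelcast.xml might look like this (the map name and the limits are placeholders):
<map name="my-distributed-map">
    <backup-count>1</backup-count>
    <time-to-live-seconds>0</time-to-live-seconds>
    <eviction-policy>LRU</eviction-policy>
    <max-size policy="PER_NODE">5000</max-size>
</map>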
Here you can find some code samples for Hazelcast Resource Adapter:
https://github.com/hazelcast/hazelcast-code-samples/tree/master/hazelcast-integration/jca-ra
And, as you already know, the documentation for the Hazelcast Resource Adapter:
http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#integrating-into-j2ee
I would like to understand the development life-cycle for Kafka Connect modules.
Following http://kafka.apache.org/090/documentation.html#connect, I would like to write a custom Kafka Connect module using the Connect API, but I don't know where to start. Are there any minimal examples of how to do this? Project setup, etc.?
Incidentally, I built this one, https://github.com/confluentinc/kafka-connect-jdbc, and tried to run it (on Google Cloud), but I get errors - clearly a missing dependency, but I don't know what to add. Of course, it might be that this is only supposed to run on the Confluent Platform. If it can run elsewhere, great; but if it can't, I would like to know how to build one from scratch - hence my question.
java.lang.NoClassDefFoundError: org/apache/kafka/common/config/ConfigDef$Recommender
at io.confluent.connect.jdbc.JdbcSourceConnector.start(JdbcSourceConnector.java:66)
at org.apache.kafka.connect.runtime.Worker.addConnector(Worker.java:186)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.startConnector(StandaloneHerder.java:197)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:145)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:85)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.config.ConfigDef$Recommender
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 5 more
The most common cause of this kind of error is the configuration of your CLASSPATH. For Kafka Connect to find your classes at runtime, you need to include them on your classpath. The following text is taken directly from the Kafka Connect documentation:
All that is required to install a new plugin is to place it in the CLASSPATH of the Kafka Connect process. All the scripts for running Kafka Connect will use the CLASSPATH environment variable if it is set when they are invoked, making it easy to run with additional connector plugins:
And how to do it:
export CLASSPATH=/path/to/my/connectors/*
bin/connect-standalone standalone.properties new-custom-connector.properties
I have also written a how to guide for Kafka Connect that you might find helpful.
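As for where to start with the Connect API itself: a connector is a pair of classes - a Connector that validates configuration and splits work, and a Task that actually moves data. Below is a minimal, hypothetical source-connector skeleton compiled against the org.apache.kafka:connect-api artifact; the class names and the single config option are invented for illustration:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.source.SourceConnector;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class MySourceConnector extends SourceConnector {
    private Map<String, String> props;

    @Override
    public String version() {
        return "0.1.0";
    }

    @Override
    public void start(Map<String, String> props) {
        this.props = props; // validate and keep the connector-level config
    }

    @Override
    public Class<? extends Task> taskClass() {
        return MySourceTask.class;
    }

    @Override
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        // Real connectors partition the work here; this sketch gives every task the same config.
        List<Map<String, String>> configs = new ArrayList<>();
        for (int i = 0; i < maxTasks; i++) {
            configs.add(props);
        }
        return configs;
    }

    @Override
    public void stop() {
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef()
                .define("topic", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH,
                        "Topic to produce records to");
    }

    public static class MySourceTask extends SourceTask {
        @Override
        public String version() {
            return "0.1.0";
        }

        @Override
        public void start(Map<String, String> props) {
            // open connections to the external system here
        }

        @Override
        public List<SourceRecord> poll() throws InterruptedException {
            Thread.sleep(1000); // a real task blocks on its source instead
            return Collections.emptyList(); // return records read from the source
        }

        @Override
        public void stop() {
            // close connections opened in start()
        }
    }
}
Packaged as a jar and placed on the CLASSPATH as shown above, the worker can then load it by its fully qualified class name.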
Update the kafka-clients.jar to kafka-clients-0.10.0.0.jar. The old version doesn't include this class: org/apache/kafka/common/config/ConfigDef$Recommender.class
You can download it here:
http://central.maven.org/maven2/org/apache/kafka/kafka-clients/0.10.0.0/kafka-clients-0.10.0.0.jar
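Or, if the project is built with Maven, declare the same jar as a dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.0</version>
</dependency>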
Would have added this as a comment, but SO indicated that I had insufficient points. Anyway, the point of this answer is to demonstrate that the JDBC Connector can run without installing the entire Confluent package and schema registry.
I was able to get Confluent's JDBC connector running without installing the Confluent platform (specifically, the schema registry). There are four Confluent libraries that you need on the classpath when you run the connector (common-config, common-metrics, common-utils and kafka-connect). For more detailed instructions, please see https://prefrontaldump.wordpress.com/2016/05/02/using-confluents-jdbc-connector-without-installing-the-entire-platform/
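For reference, the setup in that post boils down to something like the following; the directory layout and file names are my own, so adjust the paths to wherever those jars actually live:
# put the four Confluent jars and the JDBC connector jar in one directory
export CLASSPATH=/opt/connect-libs/*
bin/connect-standalone standalone.properties jdbc-source.properties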
I get the following exception message when I try to deploy an application:
BUILD FAILED
C:\eclipse\workspace\SLGIADMIN\build.xml:14: The following error occurred while executing this line:
C:\eclipse\buildcommon.xml:243: weblogic.Deployer$DeployerException: weblogic.deploy.api.tools.deployer.DeployerException: Unable to connect to 't3://localhost:7001': invalid type code: 31. Ensure the url represents a running admin server and that the credentials are correct. If using http protocol, tunneling must be enabled on the admin server.
I am using an Ant build in Eclipse and deploying to WebLogic 9.2.
I have been trying to find a solution online. I have tried enabling tunneling in the WebLogic console. That doesn't work.
I have also seen people mention that using JDK 1.5 turns on tunneling when deploying. I have verified that my version is 1.5, but that did not fix the issue either.
Invalid type code 31 is always because you're connecting with a different version of Java than the one the server is running. WebLogic 9.2 only supports Java 1.5, so make sure that's what your Ant task is using. You can also connect with later versions of Java if you set the following property in your client:
-Dsun.lang.ClassLoader.allowArraySyntax=true
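In an Ant build, that flag has to reach the JVM that actually runs the deployer, for example via a jvmarg on a forked java task (the credentials and archive path below are placeholders):
<java classname="weblogic.Deployer" fork="true">
    <!-- fork="true" is required for jvmarg to take effect;
         weblogic.jar must be on this task's classpath -->
    <jvmarg value="-Dsun.lang.ClassLoader.allowArraySyntax=true"/>
    <arg line="-adminurl t3://localhost:7001 -username weblogic -password secret -deploy -source C:\builds\myapp.ear"/>
</java>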