NoClassDefFoundError while running Kafka Connect for MongoDB in standalone mode

I am trying to set up the Debezium connector with MongoDB in standalone mode on my local machine by following this.
I have set up a MongoDB replica set with 3 nodes, 1 primary and 2 secondaries (host = localhost, ports = 27017, 27018, 27019). I have started Kafka and ZooKeeper on the local machine as per the quickstart guide.
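As a quick sanity check of the replica set roles (a sketch assuming the legacy mongo shell is installed; the one-liner is illustrative only):
mongo --port 27017 --eval "rs.status().members.forEach(function(m) { print(m.name + ' ' + m.stateStr); })"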
After this, I downloaded the MongoDB connector plugin jar from here.
I set the plugin.path variable to point at the MongoDB plugin jars:
plugin.path=dbz_connector_mongodb_jar_path,KAFKA_HOME/libs
Then I created the connector config for standalone mode,
KAFKA_HOME/config/my_mongo_connector.properties:
{
  "name": "my-mongo-connector",
  "config": {
    "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
    "mongodb.hosts": "rc0/127.0.0.1:27017",
    "mongodb.name": "myMongoConnceter",
    "collection.include.list": "dbname.collectionName"
  }
}
Now, when I run Kafka Connect with the below command:
cd KAFKA_HOME
bin/connect-standalone.sh config/connect-standalone.properties config/my_mongo_connector.properties
I see the following error:
java.lang.NoClassDefFoundError: com/mongodb/MongoException
at java.base/java.lang.Class.getDeclaredConstructors0(Native Method)
at java.base/java.lang.Class.privateGetDeclaredConstructors(Class.java:3137)
at java.base/java.lang.Class.getConstructor0(Class.java:3342)
at java.base/java.lang.Class.newInstance(Class.java:556)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:392)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:362)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:334)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:268)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:260)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:229)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:206)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:61)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:78)
Caused by: java.lang.ClassNotFoundException: com.mongodb.MongoException
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 13 more
Then I added the mongo-java-driver-3.12.10.jar file, which contains the class com.mongodb.MongoException, to the Kafka Connect plugins folder. But I still face the same error.
I have also found that this jar is loaded while running the Kafka Connect command:
INFO Loading plugin from: debezium-connector-mongodb_path/mongo-java-driver-3.12.10.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:246)
INFO Registered loader: PluginClassLoader{pluginLocation=file:debezium-connector-mongodb_path/mongo-java-driver-3.12.10.jar} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:269)
EDIT:
My problem of NoClassDefFoundError was solved by correcting plugin.path.
Earlier it was /opt/debezium-connector-mongodb and I changed it to /opt, as suggested in the comment.
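For reference, the working line now looks like this (keeping the KAFKA_HOME entry from my original setup); each plugin.path entry should be a directory that contains one sub-directory per plugin, holding all of that plugin's jars:
plugin.path=/opt,KAFKA_HOME/libs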
My other mistake: I had written the connector properties file in JSON format, so I changed it to the properties file format:
name=my-mongo-connector
connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.hosts=rc0/127.0.0.1:27017
mongodb.name=myMongoConnector
collection.include.list=dbname.collectionName
topic.creation.default.replication.factor=1
topic.creation.default.partitions=2
mongodb.members.auto.discover=false
Now Kafka Connect starts, but when a new entry is inserted into the Mongo collection, the Kafka topic is not created.
So the problem now is that no Kafka topic is created even after data is inserted into the DB.
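A first diagnostic step (the topic name below is an assumption derived from mongodb.name, since Debezium names topics <mongodb.name>.<database>.<collection>) is to list the broker's topics and try consuming from the expected one:
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic myMongoConnector.dbname.collectionName --from-beginning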

Related

Debezium Server app could not access file "decoderbufs" using Postgres 11

I'm trying to set up a Debezium server with Postgres and AWS Kinesis using the following instructions: https://debezium.io/documentation/reference/stable/operations/debezium-server.html
and I am facing this issue while executing sh run.sh:
io.debezium.DebeziumException: Creation of replication slot failed
at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:143)
at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:759)
at io.debezium.embedded.ConvertingEngineBuilder$2.run(ConvertingEngineBuilder.java:188)
at io.debezium.server.DebeziumServer.lambda$start$1(DebeziumServer.java:147)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: org.postgresql.util.PSQLException: ERROR: could not access file "decoderbufs": No such file or directory
I found a suggested solution of adding the plugin.name=pgoutput property to the configuration file of the Postgres connector, but it works only for the Debezium Connector, not for the Debezium Server app.
My Debezium Server application.properties file:
debezium.sink.type=kinesis
debezium.sink.kinesis.region=us-east-1
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.offset.storage.file.filename=data/offsets.dat
debezium.source.offset.flush.interval.ms=0
debezium.source.database.hostname=localhost
debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=postgres
debezium.source.database.dbname=dbzm
debezium.source.database.server.name=debezium_cdc
debezium.source.schema.include.list=business_view
debezium.source.table.include.list=inventory
quarkus.log.console.json=false
debezium.snapshot.new.tables=parallel
This variant of the property works fine with Debezium Server, since Debezium Server passes everything under the debezium.source. prefix through to the underlying source connector:
debezium.source.plugin.name=pgoutput
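As a server-side sanity check (the slot name dbz_probe is hypothetical; this assumes psql access with the credentials above and wal_level=logical), you can confirm that the pgoutput plugin is usable by creating and dropping a slot manually:
psql -h localhost -p 5432 -U postgres -d dbzm \
  -c "SELECT pg_create_logical_replication_slot('dbz_probe', 'pgoutput');" \
  -c "SELECT pg_drop_replication_slot('dbz_probe');"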

Kafka Sink: ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130)

I am trying to stream data from one file to another file. It was working earlier, and suddenly it started giving the error ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130). I have restarted ZooKeeper, the Kafka server, the Schema Registry, and the source and sink connectors, but I am still facing the same issue and am unable to resolve it. Any suggestion would be helpful.
Source connector:
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/home/jimmacaulay/Desktop/ETL/Kafka/confluent-5.5.1/data/data/Jim_Source.csv
topic=Jim
Sink connector:
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=/home/jimmacaulay/Desktop/ETL/Kafka/confluent-5.5.1/data/data/Jim_Sink.csv
topics=Jim
Error:
[2020-08-18 06:25:50,482] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130)
org.apache.kafka.connect.errors.ConnectException: Unable to initialize REST server
at org.apache.kafka.connect.runtime.rest.RestServer.initializeServer(RestServer.java:217)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:87)
Caused by: java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:8083
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:231)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.eclipse.jetty.server.Server.doStart(Server.java:385)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.apache.kafka.connect.runtime.rest.RestServer.initializeServer(RestServer.java:215)
... 1 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
... 8 more
I resolved the error by starting connect-standalone with the source and sink properties together:
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-source_Topic_Jim.properties ../config/connect-file-sink_Topic_Jim.properties
Earlier I was starting them separately, as below:
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-source_Topic_Jim.properties
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-sink_Topic_Jim.properties
Cause of the issue:
When starting them separately, connect-standalone starts first for the source properties, binding the REST port 8083. When the sink properties are then started, the second worker tries to use the same port and fails.
Solutions:
Pass both the source and sink properties when starting connect-standalone, so both connectors run in one worker sharing the same port.
Or define different port numbers in the properties files and start them separately (see the sketch below).
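For the second option, a minimal sketch (port 8084 is an arbitrary choice): in the worker properties file used for the sink, override the REST port so the two standalone workers don't collide:
rest.port=8084
or, on newer Connect versions, the equivalent listeners setting:
listeners=http://0.0.0.0:8084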

While I'm trying to run Apache Atlas, I'm facing some HBase errors (I'm using embedded HBase and Solr)

My Apache Atlas server has started, but I found errors in my application.log file.
The UI for Apache Atlas is also not running.
I've followed each and every step from the Apache website, and all went well.
I gave all permissions in the atlas-env.sh and application-properties files.
Can anyone help me figure out how to fix this?
2019-10-25 12:25:49,366 WARN - [main:] ~ Running setup per configuration atlas.server.run.setup.on.start. (SetupSteps$SetupRequired:186)
2019-10-25 12:25:50,104 WARN - [main:] ~ Retrieve cluster id failed (ConnectionImplementation:551)
java.util.concurrent.ExecutionException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbaseid
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:549)
at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:287)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:219)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:114)
at org.janusgraph.diskstorage.hbase2.HBaseCompat2_0.createConnection(HBaseCompat2_0.java:46)
at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.<init>(HBaseStoreManager.java:314)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:476)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:408)
at org.janusgraph.graphdb.configuration.GraphDatabaseC
When HBase starts, the HBase Master node creates the node "/hbase/hbaseid" in ZooKeeper.
1. Check the processes.
Check whether HBase and ZooKeeper are running with 'jps -m'.
If you configured HBase to manage ZooKeeper internally, you will not see the ZooKeeper process with the jps command; in that case check its port with 'netstat -nt | grep ZK_PORT', where the port is normally 2181.
netstat -nt | grep 2181
2. Check the ZooKeeper node
If you run the ZooKeeper cluster independently, you can check the node "/hbase/hbaseid" with the ZooKeeper CLI like this:
ZOOKEEPER/bin/zkCli.sh
[zk: ...] ls /
[zk: ...] get /hbase/hbaseid
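For a non-interactive check (assuming ZooKeeper listens locally on 2181), the same lookup can be done in one line:
ZOOKEEPER/bin/zkCli.sh -server localhost:2181 get /hbase/hbaseid
If the node exists the cluster id is printed; a NoNode error means HBase has not registered itself yet.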
I hope this could help you.
install atlas
You can download the source code of Atlas v2.0.0 or the master branch from here and build it.
$ export MAVEN_OPTS="-Xms2g -Xmx2g"
$ mvn clean install
$ mvn clean package -Pdist
If you build the master branch, you can find the server package in /SOURCE_CODE/distro/target/apache-atlas-3.0.0-SNAPSHOT-server/apache-atlas-3.0.0-SNAPSHOT.
You should configure the server prior to running it.
Here are the minimum settings. Please find the atlas-application.properties file in the conf directory.
atlas.graph.storage.hostname=xxx.xxx.xxx.xxx:xxxx => zookeeper addr and port for hbase
atlas.graph.index.search.backend=[solr or elasticsearch] => choose one you want to use.
atlas.graph.index.hostname=xxx.xxx.xxx.xxx => solr or elasticsearch server's addr
atlas.kafka.zookeeper.connect=xxx.xxx.xxx.xxx:xxxx => zookeeper addr and port for Kafka
atlas.kafka.bootstrap.servers=xxx.xxx.xxx.xxx:xxxx => kafka addr
atlas.audit.hbase.zookeeper.quorum=xxx.xxx.xxx.xxx:xxxx => zookeeper addr and port for hbase
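For example, a single-host sketch with every service running locally (all addresses here are assumptions, filled into the placeholders above):
atlas.graph.storage.hostname=localhost:2181
atlas.graph.index.search.backend=solr
atlas.graph.index.hostname=localhost
atlas.kafka.zookeeper.connect=localhost:2181
atlas.kafka.bootstrap.servers=localhost:9092
atlas.audit.hbase.zookeeper.quorum=localhost:2181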
To run the server:
$ bin/atlas_start.py
install zookeeper
Actually, to install ZooKeeper, there is almost nothing to do.
Just follow the steps.
In this case, you should change your HBase environment in hbase-env.sh:
export HBASE_MANAGES_ZK=false
If you see warnings in the HBase log file like 'Could not start ZK at requested port of 2181.', then check the hbase-site.xml file and set hbase.cluster.distributed to true.
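That setting lives in hbase-site.xml; a minimal sketch of the property block:
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>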

Error trying to start ZooKeeper server - Confluent setup

I am trying to set up Confluent 4.1.1 on Ubuntu 16.04. To start the ZooKeeper server, I ran ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties.txt from the root directory of Confluent, following this tutorial.
The error that comes up is:
log4j:ERROR Could not read configuration file from URL [file:./bin/../config/log4j.properties].
java.io.FileNotFoundException: ./bin/../config/log4j.properties (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.<clinit>(QuorumPeerMain.java:64)
log4j:ERROR Ignoring configuration file [file:./bin/../config/log4j.properties].
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.quorum.QuorumPeerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
I am new to Kafka, and I have no clue what this means. Any help in resolving this would be appreciated.
The link you're following covers only Apache Kafka, not Confluent, though they should work similarly, at least for starting ZooKeeper.
If you've downloaded the Confluent distribution, though, and want a single-node cluster, you can use the Confluent CLI.
To start ZooKeeper, Kafka, and the rest of the Confluent Platform, run
./bin/confluent start
Otherwise, the ZooKeeper startup script doesn't use a txt file, and it might be unable to detect where you've extracted the tarball, so instead you can install Confluent with apt like a normal software package:
https://docs.confluent.io/current/installation/installing_cp/deb-ubuntu.html
According to the documentation in the link:
1: Run these commands after replacing path-to-confluent with your path:
export CONFLUENT_HOME=
export PATH="${CONFLUENT_HOME}/bin:$PATH"
(These commands make the "confluent" command recognizable from the terminal.)
2: Run the command below:
confluent local services start
(This command will start all the services, including ZooKeeper, Kafka, Schema Registry, etc.)

Kafka Connect failed to start

I installed Confluent OSS 4.0 on a fresh Linux CentOS 7 machine, but Kafka Connect failed to start.
Steps to reproduce:
- Install Oracle JDK 8
- Copy the confluent-4.0.0 folder to /opt/confluent-4.0.0
- Run /opt/confluent-4.0.0/confluent start
Result:
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
\Kafka Connect failed to start
connect is [DOWN]
Error log (connect.stderr):
Exception in thread "main" java.lang.NoClassDefFoundError: io/confluent/connect/storage/StorageSinkConnectorConfig
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:54)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.newInstance(Class.java:412)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:279)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:260)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:201)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:193)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:153)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:70)
Caused by: java.lang.ClassNotFoundException: io.confluent.connect.storage.StorageSinkConnectorConfig
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:62)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 22 more
Additional information:
Java version:
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
CentOS version:
centos-release-7-4.1708.el7.centos.x86_64
[Edit: 30/11/2017]
Editing the plugin.path variable in every properties file didn't fix the problem.
List of files containing the 'plugin.path' variable:
./etc/schema-registry/connect-avro-distributed.properties:84:plugin.path=/opt/confluent-4.0.0/share/java
./etc/schema-registry/connect-avro-standalone.properties:51:plugin.path=/opt/confluent-4.0.0/share/java
./etc/kafka/connect-distributed.properties:95:plugin.path=/opt/confluent-4.0.0/share/java
./etc/kafka/connect-standalone.properties:50:plugin.path=/opt/confluent-4.0.0/share/java
With Confluent 4.0.0, classloading isolation with plugin.path is enabled by default for Kafka Connect.
When you install Confluent Platform from deb or rpm packages, the default location of your plugin.path is known beforehand.
However, when you download and extract the zip or tar.gz archive of Confluent Platform somewhere in your filesystem, it's set to:
plugin.path=share/java
This is a relative path, because when you download Confluent Platform as an archive (zip or tar.gz), the location where you extract the archive is not known (in your example above it's /opt/confluent-4.0.0/).
The CLI or Connect's bin scripts will be able to guess this location if you run them from the directory where you extracted Confluent Platform.
For instance, in the example above:
cd /opt/confluent-4.0.0
./bin/confluent start
In order for you to be able to start Connect from any directory within your filesystem, given that the bin directory for Confluent Platform is in your PATH, you will need to set the property plugin.path to the absolute path location of your plugins:
To use Confluent CLI edit:
etc/schema-registry/connect-avro-distributed.properties
and set your plugin.path appropriately (here: plugin.path=/opt/confluent-4.0.0/share/java)
For the regular bin scripts, edit:
./etc/kafka/connect-distributed.properties
and
./etc/kafka/connect-standalone.properties
and set your plugin.path as above (again, in your example: plugin.path=/opt/confluent-4.0.0/share/java).
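A quick way to confirm the value took effect in all of these files (using the paths from this example) is a recursive grep:
grep -rn "plugin.path" /opt/confluent-4.0.0/etc/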
Given that you used the tar installation (rather than the Docker image approach): long story short, you need to be inside the Confluent distribution, in my example confluent-6.1.0.
As shown in the screenshot, when you run the command confluent local services start in the root directory, Connect fails, and anything after that failure (e.g. ksqlDB, Control Center, etc.) doesn't even get a chance to start.
When you run the same command inside confluent-6.1.0, everything works.