ClassNotFound exception running Kafka Connect S3 Source Connector

I am evaluating the Confluent Kafka S3 Source Connector and am stuck on an issue with the following stack trace:
[2020-12-22 15:27:41,636] ERROR WorkerConnector{id=s3-source-connector} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector)
org.apache.kafka.connect.errors.ConnectException: Failed to get list of folders from S3 bucket - kafka-connect for key path - topics/ and delimiter - /
at io.confluent.connect.s3.source.S3Storage.listFolders(S3Storage.java:286)
at io.confluent.connect.s3.source.S3Storage.getPartitions(S3Storage.java:98)
at io.confluent.connect.storage.partitioner.TimeBasedPartitioner.getPartitions(TimeBasedPartitioner.java:50)
at io.confluent.connect.cloud.storage.source.StorageSourceConnector.doStart(StorageSourceConnector.java:77)
at io.confluent.connect.cloud.storage.source.StorageSourceConnector.start(StorageSourceConnector.java:69)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:242)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:908)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:110)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:924)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:920)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.SdkClientException: Couldn't initialize a SAX driver to create an XMLReader
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.<init>(XmlResponsesSaxParser.java:123)
at com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:127)
at com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:117)
at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
at com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:69)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1714)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleSuccessResponse(AmazonHttpClient.java:1434)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1356)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:796)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5052)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4998)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4992)
at com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:938)
at io.confluent.connect.s3.source.S3Storage.listFolders(S3Storage.java:283)
... 16 more
Caused by: org.xml.sax.SAXException: SAX2 driver class org.apache.xerces.parsers.SAXParser not found
java.lang.ClassNotFoundException: org.apache.xerces.parsers.SAXParser
at org.xml.sax.helpers.XMLReaderFactory.loadClass(XMLReaderFactory.java:230)
at org.xml.sax.helpers.XMLReaderFactory.createXMLReader(XMLReaderFactory.java:191)
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.<init>(XmlResponsesSaxParser.java:120)
... 37 more
Caused by: java.lang.ClassNotFoundException: org.apache.xerces.parsers.SAXParser
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.xml.sax.helpers.NewInstance.newInstance(NewInstance.java:82)
at org.xml.sax.helpers.XMLReaderFactory.loadClass(XMLReaderFactory.java:228)
... 39 more
Connector config:
{
  "name": "source-connector",
  "config": {
    "connector.class": "io.confluent.connect.s3.source.S3SourceConnector",
    "s3.bucket.name": "bucket-test",
    "s3.region": "us-west-2",
    "tasks.max": "1",
    "topics": "migration-topic",
    "topics.dir": "topics/events",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "behavior.on.error": "log",
    "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
    "path.format": "'date'=YYYY-MM-dd/'hour'=HH",
    "key.converter": "com.pandadoc.kafka.connect.msgpack.converter.MessagePackConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "com.pandadoc.kafka.connect.msgpack.converter.MessagePackConverter",
    "value.converter.schemas.enable": "false",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "kafka-connect-dead-letter-queue",
    "errors.deadletterqueue.context.headers.enable": true,
    "confluent.license": "",
    "confluent.topic.bootstrap.servers": "localhost:9092",
    "confluent.topic.replication.factor": "3"
  }
}
Versions:
[2020-12-22 15:27:41,640] INFO Kafka version: 2.2.2-cp3 (org.apache.kafka.common.utils.AppInfoParser)
[2020-12-22 15:27:41,640] INFO Kafka commitId: 602b2e2e105b4d34 (org.apache.kafka.common.utils.AppInfoParser)
It could be a JDK bug: https://bugs.openjdk.java.net/browse/JDK-8015099.
It has been fixed in JDK 9+.
Confluent docker image confluentinc/cp-kafka-connect:5.2.4 uses JDK8:
openjdk version "1.8.0_172"
OpenJDK Runtime Environment (Zulu 8.30.0.1-linux64) (build 1.8.0_172-b01)
OpenJDK 64-Bit Server VM (Zulu 8.30.0.1-linux64) (build 25.172-b01, mixed mode)
Any other ideas on what could be wrong?

I've sorted the issue out 😅
It turned out to be the JDK bug that caused this behavior.
There is an interoperability table for Kafka Connect and Kafka versions here, hence there are two options:
- Tweak the Docker Kafka Connect image by installing JDK 9+ (a Dockerfile sketch follows the list).
- Bump Kafka Connect up to 6.x (if the Kafka version allows), which uses JDK 11.
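A minimal sketch of the first option, assuming the 5.2.4 image is Debian-based and that an OpenJDK 11 runtime package is available in its repositories (the package name and JAVA_HOME path below are assumptions; adjust them for the actual base OS):
# Hypothetical Dockerfile: extend the stock Connect image with JDK 11
FROM confluentinc/cp-kafka-connect:5.2.4
USER root
# Assumed package name for a Debian-based image
RUN apt-get update && apt-get install -y openjdk-11-jre-headless
ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
ENV PATH=$JAVA_HOME/bin:$PATH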

Related

Error while Deploying Kafka Connect in Distributed Mode

I am trying to deploy my Confluent Kafka Connect for S3 in distributed mode, but I am encountering the following error:
(org.eclipse.jetty.server.HttpChannel) [qtp1620643420-22]
java.lang.AbstractMethodError: javax.ws.rs.core.UriBuilder.uri(Ljava/lang/String;)Ljavax/ws/rs/core/UriBuilder;
at javax.ws.rs.core.UriBuilder.fromUri(UriBuilder.java:96)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:275)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:544)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1307)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:494)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:135)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918)
at java.lang.Thread.run(Thread.java:748)
I can see javax.ws.rs-api-2.1.1.jar available in the lib folder, yet that does not solve the issue. I tried importing the Glassfish jars, but that didn't help either.
I am not sure what the issue is. Has anyone faced this and can help me?
Version I am using:
Confluent Kafka S3 Connect version - 5.5.1
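No fix is recorded here, but an AbstractMethodError on UriBuilder.uri(String) typically means a JAX-RS 1.x API class (e.g. one from jsr311-api) is being loaded ahead of javax.ws.rs-api-2.x. A hedged diagnostic sketch (the search roots are examples):
# Look for older JAX-RS API jars shadowing javax.ws.rs-api-2.1.1.jar
find /usr/share/java /opt -name 'javax.ws.rs*.jar' -o -name 'jsr311*.jar' 2>/dev/null
# If a 1.x jar turns up inside a connector plugin or the worker's lib
# directory, removing or upgrading it is the usual remedy.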

MongoDB Kafka Source Connector throws java.lang.IllegalStateException: Queue full when using copy.existing: true

When importing data from MongoDB to Kafka using the connector (https://github.com/mongodb/mongo-kafka), it throws java.lang.IllegalStateException: Queue full.
I use the default copy.existing.queue.size of 16000 and copy.existing: true. What value should I set? The collection size is 10 GB.
Environment:
mongo-kafka-connect: 1.0.0
Kafka: 2.4.0
Kafka-Connect: 2.4.0
MongoDB server: 3.6.14
mongodb-driver-sync: 3.12.1
Stacktrace:
org.apache.kafka.connect.errors.ConnectException: java.lang.IllegalStateException: Queue full
at com.mongodb.kafka.connect.source.MongoCopyDataManager.poll(MongoCopyDataManager.java:95)
at com.mongodb.kafka.connect.source.MongoSourceTask.getNextDocument(MongoSourceTask.java:301)
at com.mongodb.kafka.connect.source.MongoSourceTask.poll(MongoSourceTask.java:154)
at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:265)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalStateException: Queue full
at java.base/java.util.AbstractQueue.add(Unknown Source)
at java.base/java.util.concurrent.ArrayBlockingQueue.add(Unknown Source)
at com.mongodb.client.internal.Java8ForEachHelper.forEach(Java8ForEachHelper.java:30)
at com.mongodb.client.internal.Java8AggregateIterableImpl.forEach(Java8AggregateIterableImpl.java:54)
at com.mongodb.kafka.connect.source.MongoCopyDataManager.copyDataFrom(MongoCopyDataManager.java:123)
at com.mongodb.kafka.connect.source.MongoCopyDataManager.lambda$new$0(MongoCopyDataManager.java:87)
... 5 more
This was fixed in
https://github.com/mongodb/mongo-kafka/commit/7e6bf97742f2ad75cde394d088823b86880cdf4e
and will be released after 1.0.0. So if anyone faces the same issue, please update to a version later than 1.0.0.
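In the meantime, copy.existing.queue.size is an ordinary connector config property that can be raised, though on a large collection this may only postpone the error, since the stack trace shows the copy thread enqueueing with a non-blocking add(). A hedged sketch of bumping it through the Connect REST API (the connector name, host, and abridged config are examples; PUT replaces the entire config, so include all of your settings):
curl -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connectors/mongo-source/config \
  -d '{
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "copy.existing": "true",
    "copy.existing.queue.size": "64000"
  }'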

Kafka Connect: No suitable driver found

I am trying Kafka with a Postgres sink using the JDBC sink connector.
Exception:
INFO Unable to connect to database on attempt 1/3. Will retry in 10000 ms. (io.confluent.connect.jdbc.util.CachedConnectionProvider:91)
java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost:5432/casb
at java.sql.DriverManager.getConnection(DriverManager.java:689)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:85)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.getValidConnection(CachedConnectionProvider.java:68)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:56)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:69)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:288)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Sink.properties:
name=test-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=fp_test
connection.url=jdbc:postgresql://localhost:5432/casb
connection.user=admin
connection.password=***
auto.create=true
I have set plugin.path=/usr/share/java/kafka-connect-jdbc
In /usr/share/java/kafka-connect-jdbc I have the following files:
kafka-connect-jdbc-4.0.0.jar, postgresql-9.4-1206-jdbc41.jar, sqlite-jdbc-3.8.11.2.jar, and some other jars that come packaged with Confluent.
I then downloaded the Postgres JDBC driver postgresql-42.2.2.jar, copied it into the same folder, and tried again. Still the same exception.
Kindly help me out with this.
Setting plugin.path=/usr/share/java and CLASSPATH=/usr/share/java/kafka-connect-jdbc/ solved the issue.
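A minimal sketch of that fix (the worker script and properties paths are examples from a standard Confluent layout):
# In the worker properties, widen plugin.path to the parent directory:
#   plugin.path=/usr/share/java
# Export the JDBC plugin directory so DriverManager can find the driver jar:
export CLASSPATH=/usr/share/java/kafka-connect-jdbc/
connect-standalone /etc/kafka/connect-standalone.properties sink.properties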

can't load jdbc driver class PostgreSQL + NIFI

I am starting to work with NiFi, and this is my first exercise.
I am trying to put a CSV file into a Postgres table. I defined my database driver in the DBCPConnectionPool controller service settings.
The error is:
can't load jdbc driver class
In my log file I have this message:
ERROR [StandardProcessScheduler Thread-1] o.a.n.c.s.StandardControllerServiceNode DBCPConnectionPool[id=c25f8f91-0161-1000-a496-8910832bdbd8] F$
org.apache.nifi.reporting.InitializationException: Can't load Database Driver
at org.apache.nifi.dbcp.DBCPConnectionPool.getDriverClassLoader(DBCPConnectionPool.java:249)
at org.apache.nifi.dbcp.DBCPConnectionPool.onConfigured(DBCPConnectionPool.java:198)
at sun.reflect.GeneratedMethodAccessor437.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
at org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:409)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.postgresql.Driver
The Postgres JDBC driver doesn't come packaged with NiFi. You must (a command-line sketch of the first three steps follows the list):
- Download the driver jar, e.g. postgresql-42.2.24.jar
- Place it in the nifi/lib folder
- Restart NiFi
- Open your DBCPConnectionPool controller service properties
- Set Database Driver Class Name to org.postgresql.Driver
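A sketch of those first steps (the NiFi install path is an example):
# Download the driver into NiFi's lib folder, then restart NiFi
cd /opt/nifi/lib
wget https://jdbc.postgresql.org/download/postgresql-42.2.24.jar
/opt/nifi/bin/nifi.sh restart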
Resources
https://jdbc.postgresql.org/download.html
https://docs.oracle.com/cd/E19509-01/820-3497/agqka/index.html
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.14.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html
The asker found the root cause: they had inserted a return (line break) in the line after the JDBC driver class name.

Kafka Connect failed to start

I installed Confluent OSS 4.0 on a fresh Linux CentOS 7, but Kafka Connect failed to start.
Steps to reproduce :
- Install Oracle JDK 8
- Copy confluent-4.0.0 folder on opt/confluent-4.0.0
- Run /opt/confluent-4.0.0/confluent start
Result :
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
Kafka Connect failed to start
connect is [DOWN]
Error log (connect.stderr):
Exception in thread "main" java.lang.NoClassDefFoundError: io/confluent/connect/storage/StorageSinkConnectorConfig
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:54)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.newInstance(Class.java:412)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:279)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:260)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:201)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:193)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:153)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:70)
Caused by: java.lang.ClassNotFoundException: io.confluent.connect.storage.StorageSinkConnectorConfig
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:62)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 22 more
Additional information:
Java version :
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
Centos version :
centos-release-7-4.1708.el7.centos.x86_64
[Edit: 30/11/2017]
Editing the plugin.path variable in every properties file didn't fix the problem.
List of files containing 'plugin.path' variable :
./etc/schema-registry/connect-avro-distributed.properties:84:plugin.path=/opt/confluent-4.0.0/share/java
./etc/schema-registry/connect-avro-standalone.properties:51:plugin.path=/opt/confluent-4.0.0/share/java
./etc/kafka/connect-distributed.properties:95:plugin.path=/opt/confluent-4.0.0/share/java
./etc/kafka/connect-standalone.properties:50:plugin.path=/opt/confluent-4.0.0/share/java
With Confluent 4.0.0, classloading isolation with plugin.path is enabled by default for Kafka Connect.
When you install Confluent Platform from deb or rpm packages the default location of your plugin.path is known beforehand.
However, when you download and extract the zip or tar.gz archive of Confluent Platform somewhere in your filesystem, it's set to:
plugin.path=share/java
This is a relative path, because when you download Confluent Platform as an archive (zip or tar.gz), the location where you extract the archive is not known (in your example above it's /opt/confluent-4.0.0/).
The CLI or Connect's bin scripts will be able to guess this location if you run it from the directory where you extracted Confluent platform:
For instance, in the example above:
cd /opt/confluent-4.0.0
./bin/confluent start
To be able to start Connect from any directory in your filesystem (given that the Confluent Platform bin directory is in your PATH), you need to set the plugin.path property to the absolute location of your plugins (a shell sketch follows these steps):
To use Confluent CLI edit:
etc/schema-registry/connect-avro-distributed.properties
and set your plugin.path appropriately (here: plugin.path=/opt/confluent-4.0.0/share/java)
For the regular bin scripts edit:
./etc/kafka/connect-distributed.properties
and
./etc/kafka/connect-standalone.properties
and set your plugin.path as above (again, in your example: plugin.path=/opt/confluent-4.0.0/share/java).
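A quick sketch of those edits from the shell (the sed pattern assumes the plugin.path lines look like the ones listed in the question):
cd /opt/confluent-4.0.0
# Point plugin.path at the absolute plugin location in every worker config
sed -i 's|^plugin.path=.*|plugin.path=/opt/confluent-4.0.0/share/java|' \
  etc/schema-registry/connect-avro-distributed.properties \
  etc/schema-registry/connect-avro-standalone.properties \
  etc/kafka/connect-distributed.properties \
  etc/kafka/connect-standalone.properties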
This applies if you used the tar installation (rather than the Docker image approach). Long story short, you need to be inside the Confluent distribution directory, in my example confluent-6.1.0.
When you run the command confluent local services start from outside that directory, Connect fails, and anything after that failure (e.g. ksqlDB, Control Center) doesn't even get a chance to start.
When you run the same command inside confluent-6.1.0, everything works.
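A minimal sketch (the distribution path is this answer's example):
cd ~/confluent-6.1.0
bin/confluent local services start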