Not sure what is wrong. I'm trying to set up the Snowflake Kafka connector, and it seems to be failing without throwing any useful logs:
[2021-04-07 21:09:25,024] INFO Creating connector TEST_CONNECTOR of type com.snowflake.kafka.connector.SnowflakeSinkConnector (org.apache.kafka.connect.runtime.Worker:202)
[2021-04-07 21:09:25,028] INFO Instantiated connector TEST_CONNECTOR with version 1.5.0 of type class com.snowflake.kafka.connector.SnowflakeSinkConnector (org.apache.kafka.connect.runtime.Worker:205)
[2021-04-07 21:09:25,029] INFO
[SF_KAFKA_CONNECTOR] Snowflake Kafka Connector Version: 1.5.0 (com.snowflake.kafka.connector.Utils:99)
[2021-04-07 21:09:25,092] WARN
[SF_KAFKA_CONNECTOR] Connector update is available, please upgrade Snowflake Kafka Connector (1.5.0 -> 1.5.2) (com.snowflake.kafka.connector.Utils:136)
[2021-04-07 21:09:25,092] INFO
[SF_KAFKA_CONNECTOR] SnowflakeSinkConnector:start (com.snowflake.kafka.connector.SnowflakeSinkConnector:91)
[2021-04-07 21:09:25,330] INFO
[SF_KAFKA_CONNECTOR] initialized the snowflake connection (com.snowflake.kafka.connector.internal.SnowflakeConnectionServiceV1:38)
[2021-04-07 21:09:25,336] INFO Finished creating connector TEST_CONNECTOR (org.apache.kafka.connect.runtime.Worker:224)
[2021-04-07 21:09:25,337] INFO Skipping reconfiguration of connector sflksink since it is not running (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:285)
[2021-04-07 21:09:25,338] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
java.lang.NullPointerException
at org.apache.kafka.connect.cli.ConnectStandalone$1.onCompletion(ConnectStandalone.java:104)
at org.apache.kafka.connect.cli.ConnectStandalone$1.onCompletion(ConnectStandalone.java:98)
at org.apache.kafka.connect.util.ConvertingFutureCallback.onCompletion(ConvertingFutureCallback.java:44)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:185)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
[2021-04-07 21:09:25,340] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2021-04-07 21:09:25,341] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:211)
[2021-04-07 21:09:25,345] INFO Stopped http_8083#2cc0fa2a{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306)
[2021-04-07 21:09:25,354] INFO Stopped o.e.j.s.ServletContextHandler#5c83ae01{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
[2021-04-07 21:09:25,360] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:222)
[2021-04-07 21:09:25,360] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:77)
[2021-04-07 21:09:25,360] INFO Stopping connector TEST_CONNECTOR (org.apache.kafka.connect.runtime.Worker:305)
[2021-04-07 21:09:25,361] INFO
[SF_KAFKA_CONNECTOR] SnowflakeSinkConnector:stop (com.snowflake.kafka.connector.SnowflakeSinkConnector:141)
[2021-04-07 21:09:25,362] INFO Stopped connector TEST_CONNECTOR (org.apache.kafka.connect.runtime.Worker:321)
[2021-04-07 21:09:25,362] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:151)
[2021-04-07 21:09:25,365] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:67)
[2021-04-07 21:09:25,365] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:172)
[2021-04-07 21:09:25,369] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:87)
[2021-04-07 21:09:25,371] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:70)
The config file looks like this:
name=sflksink
connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector
tasks.max=1
topics=snowflake-connect-test
buffer.count.records=10
buffer.flush.time=60
buffer.size.bytes=50
snowflake.url.name=url
snowflake.user.name=<user>
snowflake.database.name=<database>
snowflake.schema.name=<schema>
snowflake.private.key=<private_key>
snowflake.warehouse.name=MY_WAREHOUSE
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=com.snowflake.kafka.connector.records.SnowflakeJsonConverter
Any pointers would be helpful.
In the log message it says:
Connector update is available, please upgrade Snowflake Kafka Connector (1.5.0 -> 1.5.2)
So I would suggest that you update your connector JAR to the latest version and try again.
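For example, if the connector was installed as a plain JAR, one way to upgrade is to swap in the 1.5.2 release from Maven Central. A minimal sketch, where /opt/kafka/connect/plugins is an assumed placeholder for whatever directory your worker's plugin.path points at:
# paths below are placeholders; adjust to your plugin directory
rm /opt/kafka/connect/plugins/snowflake-kafka-connector-1.5.0.jar
wget -P /opt/kafka/connect/plugins https://repo1.maven.org/maven2/com/snowflake/snowflake-kafka-connector/1.5.2/snowflake-kafka-connector-1.5.2.jar
Restart the Connect worker afterwards so the new JAR is picked up.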
I am a beginner and I have to use Kafka for data transfer into/from Hadoop FS (or any other application, not just through put or copyFromLocal commands). Kafka needs ZooKeeper as well; I enabled ZooKeeper audit logging but I still get errors.
For Kafka, when I want to start it:
JMX_PORT=8004 bin/kafka-server-start.sh config/server.properties
I get the error:
[2022-02-16 13:56:45,939] INFO shutting down (kafka.server.KafkaServer)
[2022-02-16 13:56:46,114] INFO App info kafka.server for 0 unregistered (org.apache.kafka.common.utils.AppInfoParser)
[2022-02-16 13:56:46,133] INFO shut down completed (kafka.server.KafkaServer)
[2022-02-16 13:56:46,133] ERROR Exiting Kafka. (kafka.Kafka$)
[2022-02-16 13:56:46,165] INFO shutting down (kafka.server.KafkaServer)
And when I want to start Zookeeper using the command:
bin/zookeeper-server-start.sh config/zookeeper.properties
I get the following (and it gets stuck on it):
[2022-02-16 14:03:13,954] INFO zookeeper.request_throttler.shutdownTimeout = 10000 (org.apache.zookeeper.server.RequestThrottler)
[2022-02-16 14:03:13,955] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
[2022-02-16 14:03:14,136] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
[2022-02-16 14:03:14,138] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
Does anyone know how to work this out? I enabled audit logging, but the problem remains.
The ZooKeeper server isn't "stuck"; it's waiting for connections.
Open a new terminal and start Kafka there (see the sketch below).
Alternatively, you could use Docker Compose / Kubernetes if you think your host / local JVM is causing issues.
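A minimal sequence, assuming a stock Kafka distribution with the default config files:
# terminal 1: start ZooKeeper and leave it running in the foreground
bin/zookeeper-server-start.sh config/zookeeper.properties
# terminal 2: once ZooKeeper is listening, start the broker
JMX_PORT=8004 bin/kafka-server-start.sh config/server.properties
Both processes stay attached to their terminals; the ZooKeeper log simply pauses at the audit line until a client such as the broker connects.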
I'm trying to do change data capture with Debezium using Postgres, Kafka, Kafka Connect, and the Debezium Postgres connector. I'm having an issue when trying to start the Kafka Connect service with the Debezium Postgres connector.
This is the plugin.path in my config/connect-standalone.properties:
plugin.path=/opt/kafka/kafka_2.13-3.1.0/connect/debezium-connector-postgres/
The connect-debezium-postgres.properties file:
name=first-connector
connector.class=io.debezium.connector.postgresql.PostgresConnector
database.hostname=postgres
database.port=5432
database.user=postgres
database.password=password
database.server.id=1
database.server.name=bankserver1
database.include.list=bank
table.inlcude.list=bank.holding
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=dbhistory.test
include.schema.changes=true
tombstones.on.delete=false
The command for starting the Kafka Connect service with the Debezium Postgres connector:
bin/connect-standalone.sh config/connect-standalone.properties config/connect-debezium-postgres.properties
The error from the Kafka Connect service:
[2022-02-22 10:41:50,571] ERROR Failed to create job for config/connect-debezium-postgres.properties (org.apache.kafka.connect.cli.ConnectStandalone:107)
[2022-02-22 10:41:50,581] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:117)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:115)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:99)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:114)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`
at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:691)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:207)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.lambda$null$0(StandaloneHerder.java:193)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2022-02-22 10:41:50,597] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:67)
[2022-02-22 10:41:50,597] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:311)
[2022-02-22 10:41:50,630] INFO Stopped http_8083#26be6ca7{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:381)
[2022-02-22 10:41:50,631] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:149)
[2022-02-22 10:41:50,646] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:328)
[2022-02-22 10:41:50,646] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:106)
[2022-02-22 10:41:50,649] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:199)
[2022-02-22 10:41:50,650] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:66)
[2022-02-22 10:41:50,650] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:659)
[2022-02-22 10:41:50,651] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:663)
[2022-02-22 10:41:50,651] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:669)
[2022-02-22 10:41:50,652] INFO App info kafka.connect for 10.0.2.15:8083 unregistered (org.apache.kafka.common.utils.AppInfoParser:83)
[2022-02-22 10:41:50,652] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:220)
[2022-02-22 10:41:50,663] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:124)
[2022-02-22 10:41:50,664] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:7
The content of the folder that I put in the plugin path: (screenshot omitted)
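As a general debugging step for this class of error, the validation endpoint mentioned in the stack trace can be queried directly; a minimal sketch, assuming a Connect worker is reachable on localhost:8083 and reusing the config from the question:
curl -s -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connector-plugins/io.debezium.connector.postgresql.PostgresConnector/config/validate \
  -d '{"connector.class": "io.debezium.connector.postgresql.PostgresConnector", "database.hostname": "postgres", "database.port": "5432", "database.user": "postgres", "database.password": "password", "database.server.name": "bankserver1"}'
The JSON response names each missing or invalid field, which is more precise than the one-line "A value is required" summary in the standalone log.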
My HDFS was installed via Ambari (HDP).
I'm currently trying to load Kafka topics into an HDFS sink. Kafka and HDFS were installed on the same machine, x.x.x.x.
I didn't change much from the default settings, except some ports, according to my needs.
Here is how I execute Kafka Connect:
/usr/hdp/3.1.4.0-315/kafka/bin/connect-standalone.sh /etc/kafka/connect-standalone.properties /etc/kafka-connect-hdfs/quickstart-hdfs.properties
Inside connect-standalone.properties:
bootstrap.servers=x.x.x.x:6667
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
Inside quickstart-hdfs.properties:
name=hdfs-sink
#connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
tasks.max=1
topics=test12
hdfs.url=hdfs://x.x.x.x:9000
flush.size=3
Here are the results I get when I execute it:
[2020-06-23 03:26:00,918] INFO Started o.e.j.s.ServletContextHandler#71d9cb05{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:855)
[2020-06-23 03:26:00,928] INFO Started http_8083#329a1243{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:292)
[2020-06-23 03:26:00,928] INFO Started #10495ms (org.eclipse.jetty.server.Server:410)
[2020-06-23 03:26:00,928] INFO Advertised URI: http://x.x.x.x:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:267)
[2020-06-23 03:26:00,928] INFO REST server listening at http://x.x.x.x:8083/, advertising URL http://x.x.x.x:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:217)
[2020-06-23 03:26:00,928] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:55)
[2020-06-23 03:26:00,959] ERROR Failed to create job for quickstart-hdfs.properties (org.apache.kafka.connect.cli.ConnectStandalone:102)
[2020-06-23 03:26:00,960] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Missing required configuration "confluent.topic.bootstrap.servers" which has no default value.
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:110)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Missing required configuration "confluent.topic.bootstrap.servers" which has no default value.
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:415)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:189)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
[2020-06-23 03:26:00,961] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2020-06-23 03:26:00,961] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:223)
[2020-06-23 03:26:00,964] INFO Stopped http_8083#329a1243{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:341)
[2020-06-23 03:26:00,965] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:167)
[2020-06-23 03:26:00,972] INFO Stopped o.e.j.s.ServletContextHandler#71d9cb05{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:1045)
[2020-06-23 03:26:00,974] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:241)
[2020-06-23 03:26:00,974] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:95)
[2020-06-23 03:26:00,974] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:184)
[2020-06-23 03:26:00,974] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:67)
[2020-06-23 03:26:00,975] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:205)
[2020-06-23 03:26:00,975] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:112)
[2020-06-23 03:26:00,975] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:70)
I'm really new to the Kafka and HDFS environment. Any suggestions and help will be appreciated. Thank you.
Edit:
I've added the following to my connect-standalone.properties:
bootstrap.servers=x.x.x.x:6667
confluent.license=
confluent.topic.bootstrap.server=x.x.x.x:6667
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
Nothing changes; it still shows the same error in the log.
EDIT, SOLVED (thanks to Robin):
quickstart-hdfs.properties
name=hdfs-sink
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
tasks.max=1
topics=test12
hdfs.url=hdfs://x.x.x.x:8020
flush.size=3
confluent.license=
confluent.topic.bootstrap.servers=x.x.x.x:6667
connect-standalone.properties
bootstrap.servers=x.x.x.x:6667
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/usr/share/java,/usr/share/confluent-hub-components
Here's the error:
Missing required configuration "confluent.topic.bootstrap.servers" which has no default value.
The problem is that you've taken the config for the HDFS Sink connector and changed the connector class to a different one (HDFS 3 Sink), which has different configuration requirements.
You can follow the quickstart for the HDFS 3 sink connector, or fix your existing configuration by adding:
confluent.topic.bootstrap.servers=10.64.2.236:6667
confluent.topic.replication.factor=1
Note: in your edit you missed the s at the end of confluent.topic.bootstrap.servers, which is why it didn't work.
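If the HDFS 3 connector itself is not installed on the worker yet, the usual route is the Confluent Hub client; a sketch, assuming confluent-hub is on the PATH and installs into a directory listed in plugin.path:
confluent-hub install confluentinc/kafka-connect-hdfs3:latest
Restart the Connect worker afterwards so the plugin is discovered.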
Apache Kafka is failing to start. It shows in its logs "Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper."
How can I check the "/cluster/id"?
Previously, when Kafka failed to start, it was because I had updated Java on my server, so I needed to point $JAVA_HOME to the new path and restart Kafka, and it would work again just fine. But this case is different: I checked $JAVA_HOME and it's correct. So I wanted to know more details about this issue. I read one of the log files in /opt/kafka/logs and I got this log:
[2019-08-21 10:18:40,472] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper.
at kafka.zk.KafkaZkClient.$anonfun$createOrGetClusterId$1(KafkaZkClient.scala:1498)
at scala.Option.getOrElse(Option.scala:138)
at kafka.zk.KafkaZkClient.createOrGetClusterId(KafkaZkClient.scala:1498)
at kafka.server.KafkaServer.$anonfun$getOrGenerateClusterId$1(KafkaServer.scala:390)
at scala.Option.getOrElse(Option.scala:138)
at kafka.server.KafkaServer.getOrGenerateClusterId(KafkaServer.scala:390)
at kafka.server.KafkaServer.startup(KafkaServer.scala:208)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2019-08-21 10:18:40,476] INFO shutting down (kafka.server.KafkaServer)
[2019-08-21 10:18:40,488] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-08-21 10:18:40,500] INFO Session: 0x100494a8714000a closed (org.apache.zookeeper.ZooKeeper)
[2019-08-21 10:18:40,505] INFO EventThread shut down for session: 0x100494a8714000a (org.apache.zookeeper.ClientCnxn)
[2019-08-21 10:18:40,507] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-08-21 10:18:40,517] INFO shut down completed (kafka.server.KafkaServer)
[2019-08-21 10:18:40,517] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-08-21 10:18:40,554] INFO shutting down (kafka.server.KafkaServer)
When I read the previous error:
Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper.
I thought that the ZooKeeper id file at /tmp/zookeeper/myid had been deleted by mistake, but the file is still there with the corresponding server number written in it, and ZooKeeper is working fine. My Kafka version is 2.2.0.
I searched online for this ERROR "Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper."
but honestly, I did not find anything helpful.
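Note that /cluster/id is a znode inside ZooKeeper itself, not a file on disk, so it can be inspected with the shell that ships with Kafka; a minimal sketch, assuming ZooKeeper listens on localhost:2181:
bin/zookeeper-shell.sh localhost:2181 get /cluster/id
The /tmp/zookeeper/myid file is the local ZooKeeper server id for the ensemble, which is a different thing from the /cluster/id znode the error refers to. The broker also caches the cluster id in meta.properties under its log.dirs, so it can be worth comparing the two if the znode does exist.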
I am trying to set up a Kafka connector to move data from a Kafka topic into MongoDB (sink). For this I have added the required configurations in the connect-json-standalone.properties file and also in the connect-mongo-sink.properties file in the Kafka folder. While starting the connector I am getting the exception below:
[2019-07-23 18:07:17,274] INFO Started o.e.j.s.ServletContextHandler#76e3b45b{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:855)
[2019-07-23 18:07:17,274] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:231)
[2019-07-23 18:07:17,274] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:56)
[2019-07-23 18:07:17,635] INFO Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} (org.mongodb.driver.cluster:71)
[2019-07-23 18:07:17,636] INFO Adding discovered server localhost:27017 to client view of cluster (org.mongodb.driver.cluster:71)
[2019-07-23 18:07:17,760] INFO Closing all connections to repracli/localhost:27017 (io.debezium.connector.mongodb.ConnectionContext:86)
[2019-07-23 18:07:17,768] ERROR Failed to create job for ./etc/kafka/connect-mongodb-sink.properties (org.apache.kafka.connect.cli.ConnectStandalone:104)
[2019-07-23 18:07:17,769] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:115)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:112)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:423)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:188)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:109)
[2019-07-23 18:07:17,782] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:66)
[2019-07-23 18:07:17,782] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:239)
[2019-07-23 18:07:17,790] INFO Stopped http_localhost8084#5f96f6a2{HTTP/1.1,[http/1.1]}{localhost:8084} (org.eclipse.jetty.server.AbstractConnector:341)
[2019-07-23 18:07:17,790] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:167)
[2019-07-23 18:07:17,792] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:256)
[2019-07-23 18:07:17,793] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:94)
[2019-07-23 18:07:17,793] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:185)
[2019-07-23 18:07:17,794] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:66)
[2019-07-23 18:07:17,796] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:206)
[2019-07-23 18:07:17,799] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:111)
[2019-07-23 18:07:17,800] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:71)
I have tried to solve it by changing connection.uri in connect-mongo-sink.properties in several ways, which didn't work out. I have also googled some links, which also didn't solve my problem.
Referral links:
https://groups.google.com/forum/#!topic/debezium/bC4TUld5NGw
https://github.com/confluentinc/kafka-connect-jdbc/issues/334
connect-json-standalone.properties:
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
connect-mongo-sink.properties:
name=mongodb-sink-connector
connector.class=io.debezium.connector.mongodb.MongoDbConnector
tasks.max=1
topics=sample-consumerr-sink-topic
type.name=kafka-connect
mongodb.hosts=repracli/localhost:27017
mongodb.collection=conn_mongo_sink_collc
mongodb.connection.uri=mongodb://localhost:27017/conn_mongo_sink_db?w=1&journal=true
I want the sink connector to work in order to consume topic data into the MongoDB collection named "conn_mongo_sink_collc". Can anyone help me resolve this error?
Note: I am using a 3-member MongoDB replica set in which port 27017 is the primary and 27018 and 27019 are secondaries.
io.debezium.connector.mongodb.MongoDbConnector is a Source connector, for getting data from MongoDB into Kafka.
To stream data from Kafka into MongoDB, use a sink connector. MongoDB recently launched their own sink connector and blogged about its use, including sample configuration.
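A minimal sketch of what that sink config might look like, reusing the names from the question and assuming the official MongoDB sink connector (com.mongodb.kafka.connect.MongoSinkConnector) is installed:
name=mongodb-sink-connector
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
tasks.max=1
topics=sample-consumerr-sink-topic
connection.uri=mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=repracli
database=conn_mongo_sink_db
collection=conn_mongo_sink_collc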
I faced the same issue. This can also happen when the connector config is incomplete or invalid.
Check whether the necessary properties are set by looking at the Debezium documentation.