Kafka JDBC Sink Connector: no tasks assigned - apache-kafka
I am trying to start a JDBC sink connector with the following configuration:
{
  "name": "crm_data-sink_hh",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": 6,
    "topics": "crm_account,crm_competitor,crm_event,crm_event_participation",
    "connection.url": "jdbc:postgresql://db_host/hh?prepareThreshold=0",
    "connection.user": "db_user",
    "connection.password": "${file:db_hh_kafka_connect_pass}",
    "dialect.name": "PostgreSqlDatabaseDialect",
    "insert.mode": "upsert",
    "pk.mode": "record_value",
    "pk.fields": "guid",
    "errors.tolerance": "all",
    "errors.log.enable": true,
    "errors.log.include.messages": true,
    "errors.deadletterqueue.topic.name": "crm_data_deadletterqueue",
    "errors.deadletterqueue.context.headers.enable": true
  }
}
But no tasks are running, even though the connector itself is in the RUNNING state:
curl -X GET http://kafka-connect:10900/connectors/crm_data-sink_hh/status
{"name":"crm_data-sink_hh","connector":{"state":"RUNNING","worker_id":"172.16.24.14:10900"},"tasks":[],"type":"sink"}
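Because the connector stays RUNNING in this state, the task list has to be inspected explicitly. A minimal sketch of that check (the helper name `has_running_tasks` is mine, not part of any Kafka Connect API), applied to the status payload above:

```python
import json

def has_running_tasks(status: dict) -> bool:
    """True if the connector status reports at least one RUNNING task."""
    return any(t.get("state") == "RUNNING" for t in status.get("tasks", []))

# The status payload from the question: connector RUNNING, but zero tasks.
status = json.loads(
    '{"name":"crm_data-sink_hh",'
    '"connector":{"state":"RUNNING","worker_id":"172.16.24.14:10900"},'
    '"tasks":[],"type":"sink"}'
)
print(has_running_tasks(status))  # -> False: a RUNNING connector alone is not enough
```

In monitoring, alerting on "connector RUNNING and tasks empty" catches exactly the failure mode described here.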
I have faced this issue many times, and I'm confused because it happens randomly. My question is very similar to this one. I would appreciate any help!
Update, 11/04/2019 (unfortunately, I only have INFO-level logs at the moment)
Finally, after a few attempts, I got the connector started with running tasks by updating the config of the existing connector crm_data-sink_db_hh:
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"RUNNING","worker_id":"192.168.1.198:10900"},"tasks":[],"type":"sink"}
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"RUNNING","worker_id":"192.168.1.198:10900"},"tasks":[],"type":"sink"}
$ curl -X PUT -d @new_config.json http://docker21:10900/connectors/crm_data-sink_db_hh/config -H 'Content-Type: application/json'
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"UNASSIGNED","worker_id":"192.168.1.198:10900"},"tasks":[],"type":"sink"}
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"RUNNING","worker_id":"172.16.36.11:10900"},"tasks":[{"state":"UNASSIGNED","id":0,"worker_id":"172.16.32.11:10900"},{"state":"UNASSIGNED","id":1,"worker_id":"172.16.32.11:10900"},{"state":"RUNNING","id":2,"worker_id":"192.168.2.243:10900"},{"state":"UNASSIGNED","id":3,"worker_id":"172.16.32.11:10900"},{"state":"UNASSIGNED","id":4,"worker_id":"172.16.32.11:10900"}],"type":"sink"}
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"RUNNING","worker_id":"192.168.1.198:10900"},"tasks":[{"state":"RUNNING","id":0,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":1,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":2,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":3,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":4,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":5,"worker_id":"192.168.1.198:10900"}],"type":"sink"}
Log:
[2019-04-11 16:02:15,167] INFO Connector crm_data-sink_db_hh config updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:15,668] INFO Rebalance started (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:15,668] INFO Stopping connector crm_data-source (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:15,668] INFO Stopping task crm_data-source-0 (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:15,668] INFO Stopping connector crm_data-sink_pandora (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:15,668] INFO Stopping JDBC source task (io.confluent.connect.jdbc.source.JdbcSourceTask)
[2019-04-11 16:02:15,668] INFO Stopping table monitoring thread (io.confluent.connect.jdbc.JdbcSourceConnector)
...
Stopping connectors and tasks
...
[2019-04-11 16:02:17,373] INFO 192.168.1.91 - - [11/Apr/2019:13:02:14 +0000] "POST /connectors HTTP/1.1" 201 768 2468 (org.apache.kafka.connect.runtime.rest.RestServer)
[2019-04-11 16:02:20,668] ERROR Graceful stop of task crm_data-source-1 failed. (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:20,669] ERROR Graceful stop of task crm_data-source-0 failed. (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:20,669] ERROR Graceful stop of task crm_data-source-3 failed. (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:20,669] ERROR Graceful stop of task crm_data-source-2 failed. (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:20,669] INFO Finished stopping tasks in preparation for rebalance (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,669] INFO [Worker clientId=connect-1, groupId=21] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-04-11 16:02:20,681] INFO Tasks [crm_data-sink_hhru-0, crm_data-sink_hhru-3, crm_data-sink_hhru-4, crm_data-sink_hhru-1, crm_data-sink_hhru-2, crm_data-sink_hhru-5, crm_data-pandora_sink-0, crm_data-pandora_sink-2, crm_data-pandora_sink-1, crm_data-pandora_sink-4, crm_data-pandora_sink-3, crm_data-pandora_sink-03-0, crm_data-pandora_sink-03-2, crm_data-pandora_sink-03-1, crm_data-pandora_sink-00-1, crm_data-pandora_sink-00-0, crm_data-pandora_sink-00-3, crm_data-pandora_sink-00-2, crcrm_data-pandora_sink-00-4, crm_data-sink_hh-00-0, crm_data-sink_hh-00-1, crm_data-sink_hh-00-2, crm_data-pandora_sink-test-3, crm_data-pandora_sink-test-2, crm_data-pandora_sink-test-4,crm_data-pandora_sink-01-2, crm_data-pandora_sink-01-1, crm_data-pandora_sink-01-0, crm_data-source-3, crm_data-source-2, crm_data-source-1, crm_data-source-0, crm_data-sink_db_hh-0, crm_data-sink_db_hh-1, crm_data-sink_db_hh-2, crm_data-sink_hh-01-0, crm_data-sink_hh-01-1, crm_data-sink_hh-01-2, crm_data-sink_hh-01-3, crm_data-sink_hh-00-3, crm_data-sink_hh-00-4, crm_data-sink_hh-00-5, crm_data-sink_hh-1, crm_data-sink_hh-0, crm_data-sink_hh-3, crm_data-sink_hh-2, crm_data-sink_hh-5, crm_data-sink_hh-4, crm_data-sink_pandora-5, crm_data-sink_pandora-0, crm_data-sink_pandora-1, crm_data-sink_pandora-2, crm_data-sink_pandora-3, crm_data_account_on_competitors-source-0, crm_data-sink_pandora-4] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,681] INFO Tasks [] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,682] INFO Tasks [] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,683] INFO Tasks [] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,684] INFO Tasks [] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO [Worker clientId=connect-1, groupId=21] Successfully joined group with generation 2206465 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-04-11 16:02:20,685] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-57140c1d-3b19-4fc0-b4ca-e6ce272e1924', leaderUrl='http://192.168.1.198:10900/', offset=1168, connectorIds=[crm_data-sink_db_hh, crm_data-source, crm_data-sink_pandora], taskIds=[crm_data-source-0, crm_data-source-1, crm_data-source-2, crm_data-source-3, crm_data-sink_pandora-0, crm_data-sink_pandora-1, crm_data-sink_pandora-2, crm_data-sink_pandora-3, crm_data-sink_pandora-4, crm_data-sink_pandora-5]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting connectors and tasks using config offset 1168 (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting connector crm_data-sink_db_hh (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting connector crm_data-source (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting connector crm_data-sink_pandora (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting task crm_data-source-0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
...
Starting connectors and tasks
...
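The workaround above (re-PUTting the existing config to force a rebalance) can be scripted. A sketch using only the Python standard library, assuming the same REST endpoint as the curl calls above; `build_config_put` is a hypothetical helper name:

```python
import json
import urllib.request

def build_config_put(base_url: str, name: str, config: dict) -> urllib.request.Request:
    """Build the PUT /connectors/{name}/config request; re-submitting the
    config triggers a rebalance, which recreated the tasks in my case."""
    return urllib.request.Request(
        url=f"{base_url}/connectors/{name}/config",
        data=json.dumps(config).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# Send against a live worker with:
#   urllib.request.urlopen(build_config_put("http://kafka-connect:10900",
#                                           "crm_data-sink_db_hh", new_config))
```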
Update, 12/04/2019
I increased the log level and reproduced the issue. I see a lot of records like the following for various tasks (tasks of already-deleted connectors, or tasks that are not yet running):
[2019-04-12 15:14:32,360] DEBUG Storing new config for task crm_data-sink_hh-3 this will wait for a commit message before the new config will take effect. New config: {...} (org.apache.kafka.connect.storage.KafkaConfigBackingStore)
There are tasks of deleted connectors in the task list; is that OK? The same situation exists in Kafka Connect's internal topics.
My main question: why does the connector not fail when none of its tasks are running, for whatever reason? After all, the connector is effectively not working in this situation.
It looks like a bug in Kafka Connect itself. There is a Kafka Jira ticket about this issue.
Related
Running Kafka Connect in distributed mode, no obvious errors, but data does not end up in sink connector
I'm running Kafka Connect via this repo https://github.com/entechlog/kafka-examples/tree/master/kafka-connect-standalone, except that I have added extra configs for AWS MSK IAM authentication. I've also updated the .env file to use different variables, like the AWS MSK IAM jar file, AWS key/secret key credentials, and a few other minor things. Note that this repo runs in standalone mode, but I have updated the launch shell script to run in distributed mode: exec connect-distributed /etc/"${COMPONENT}"/"${COMPONENT}".properties. However, I have NOT created a file called kafka-connect.properties.template, because when I do, I get a whole host of errors like Missing required configuration "group.id" which has no default value, which makes no sense to me, as I can see it in the docker-compose.yml file. My goal is to get data from a third-party Kafka cluster into BigQuery. When I run my docker-compose file, I see no errors and a few warnings, but nothing that stands out to me. I get a lot of warnings like this: [2021-12-14 01:57:37,917] WARN The configuration 'camelcase.default.dataset' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:380), and this dataset is required to send data to BigQuery. It doesn't make sense to me why settings like the dataset are not being picked up.
Here are the latest logs: [2021-12-14 01:57:37,921] INFO Kafka version: 6.2.1-ce (org.apache.kafka.common.utils.AppInfoParser:119) [2021-12-14 01:57:37,922] INFO Kafka commitId: 14770bfc4e973178 (org.apache.kafka.common.utils.AppInfoParser:120) [2021-12-14 01:57:37,922] INFO Kafka startTimeMs: 1639447057921 (org.apache.kafka.common.utils.AppInfoParser:121) [2021-12-14 01:57:39,332] INFO [Producer clientId=producer-3] Cluster ID: k2eIXxm_RkmWu2-R2d0N1Q (org.apache.kafka.clients.Metadata:279) [2021-12-14 01:57:39,413] INFO [Consumer clientId=consumer-connect-kafka-connect-group-3, groupId=connect-kafka-connect-group] Cluster ID: k2eIXxm_RkmWu2-R2d0N1Q (org.apache.kafka.clients.Metadata:279) [2021-12-14 01:57:39,428] INFO [Consumer clientId=consumer-connect-kafka-connect-group-3, groupId=connect-kafka-connect-group] Subscribed to partition(s): connect-configs-0 (org.apache.kafka.clients.consumer.KafkaConsumer:1123) [2021-12-14 01:57:39,428] INFO [Consumer clientId=consumer-connect-kafka-connect-group-3, groupId=connect-kafka-connect-group] Seeking to EARLIEST offset of partition connect-configs-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState:619) [2021-12-14 01:57:40,727] INFO Finished reading KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:228) [2021-12-14 01:57:40,728] INFO Started KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:230) [2021-12-14 01:57:40,729] INFO Started KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:290) [2021-12-14 01:57:40,729] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Herder started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:312) [2021-12-14 01:57:45,059] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Cluster ID: k2eIXxm_RkmWu2-R2d0N1Q (org.apache.kafka.clients.Metadata:279) [2021-12-14 01:57:45,089] INFO [Worker clientId=connect-1, 
groupId=connect-kafka-connect-group] Discovered group coordinator <bootstrap server and port here> (id: 2147483643 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:848) [2021-12-14 01:57:45,095] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:221) [2021-12-14 01:57:45,095] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:538) [2021-12-14 01:57:45,957] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:538) [2021-12-14 01:57:49,038] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Successfully joined group with generation Generation{generationId=1, memberId='connect-1-a4cc5355-60da-46c3-8228-bff2de664f2c', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:594) [2021-12-14 01:57:49,160] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Successfully synced group in generation Generation{generationId=1, memberId='connect-1-a4cc5355-60da-46c3-8228-bff2de664f2c', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:758) [2021-12-14 01:57:49,161] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Joined group at generation 1 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-a4cc5355-60da-46c3-8228-bff2de664f2c', leaderUrl='http://kafka-connect:8083/', offset=10, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1694) [2021-12-14 01:57:49,162] WARN [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Catching up to assignment's config offset. 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:1119) [2021-12-14 01:57:49,162] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Current config state offset -1 is behind group assignment 10, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1183) [2021-12-14 01:57:49,359] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Finished reading to end of log and updated config snapshot, new config log offset: 10 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1190) [2021-12-14 01:57:49,359] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Starting connectors and tasks using config offset 10 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1244) [2021-12-14 01:57:49,359] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1272) [2021-12-14 01:57:50,791] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Session key updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1582) And also this: WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be ignored. WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will be ignored. 
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation. WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation. WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation. And when I navigate to http://localhost:8083/connectors in my browser, I get an empty list [].
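The Missing required configuration "group.id" error usually means the properties file that connect-distributed actually reads lacks the distributed-mode settings; environment variables in docker-compose.yml only help if the image's entrypoint templates them into that file. A sketch of the minimum a distributed worker properties file needs (setting names are standard Connect worker configs; the values are illustrative placeholders):

```properties
bootstrap.servers=broker1:9092
# group.id has no default and must be set for distributed mode
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# internal topics for configs, offsets, and statuses
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
```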
Kafka Connect - Restarting the worker causes a rebalance issue
I'm using a 2-node Kafka Connect cluster in distributed mode. The nodes run fine, but the moment I restart the worker service, the connector that was running on that node goes to UNASSIGNED, and then exactly 5 minutes later it changes to ASSIGNED. I don't know why this happens; generally, shouldn't the connector's tasks be moved to the other running node? Here are the logs (from 5 minutes after the worker restart): Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:221] [2021-08-17 07:23:46,120] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator:538] [2021-08-17 07:23:46,124] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Successfully joined group with generation Generation{generationId=27, memberId='connect-1-56d39766-4974-4203-945b-6eee4fe811e7', protocol='sessioned'} [org.apache.kafka.clients.consumer.internals.AbstractCoordinator:594] [2021-08-17 07:23:46,128] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Successfully synced group in generation Generation{generationId=27, memberId='connect-1-56d39766-4974-4203-945b-6eee4fe811e7', protocol='sessioned'} [org.apache.kafka.clients.consumer.internals.AbstractCoordinator:758] [2021-08-17 07:23:46,129] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Joined group at generation 27 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-ccdf6d6a-eeab-423c-9611-56795d0deca9', leaderUrl='http://172.30.32.13:8083/', offset=20, connectorIds=[mysql-connector-01], taskIds=[mysql-connector-01-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder:1694] [2021-08-17 07:23:46,129] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Starting connectors and tasks using config offset 20 
[org.apache.kafka.connect.runtime.distributed.DistributedHerder:1244] [2021-08-17 07:23:46,130] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Starting task mysql-connector-01-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder:1286] [2021-08-17 07:23:46,131] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Starting connector mysql-connector-01 [org.apache.kafka.connect.runtime.distributed.DistributedHerder:1321] I tried to restart the connector, but it's not working: curl -X POST 172.30.34.99:8083/connectors/mysql-connector-01/restart {"error_code":409,"message":"Cannot complete request momentarily due to no known leader URL, likely because a rebalance was underway."}
I found the cause: it's due to Kafka Connect's scheduled rebalance delay. A great blog post to learn more about it: https://www.confluent.io/blog/incremental-cooperative-rebalancing-in-kafka/
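The 5-minute window observed above matches the default of the worker config scheduled.rebalance.max.delay.ms (300000 ms), which tells Connect how long to wait for a departed worker to return before reassigning its connectors and tasks. A sketch of lowering it in connect-distributed.properties, assuming Connect 2.3+ with incremental cooperative rebalancing (trade-off: a shorter delay reassigns work faster but causes more churn on brief restarts):

```properties
# Default is 300000 (5 minutes); lower it so tasks are reassigned sooner
# when a worker leaves the group.
scheduled.rebalance.max.delay.ms=60000
```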
Snowflake Kafka connector failing after initializing TEST connector
I'm not sure what is wrong. I'm trying to set up the Snowflake Kafka connector and it seems to be failing without throwing any useful logs: [2021-04-07 21:09:25,024] INFO Creating connector TEST_CONNECTOR of type com.snowflake.kafka.connector.SnowflakeSinkConnector (org.apache.kafka.connect.runtime.Worker:202) [2021-04-07 21:09:25,028] INFO Instantiated connector TEST_CONNECTOR with version 1.5.0 of type class com.snowflake.kafka.connector.SnowflakeSinkConnector (org.apache.kafka.connect.runtime.Worker:205) [2021-04-07 21:09:25,029] INFO [SF_KAFKA_CONNECTOR] Snowflake Kafka Connector Version: 1.5.0 (com.snowflake.kafka.connector.Utils:99) [2021-04-07 21:09:25,092] WARN [SF_KAFKA_CONNECTOR] Connector update is available, please upgrade Snowflake Kafka Connector (1.5.0 -> 1.5.2) (com.snowflake.kafka.connector.Utils:136) [2021-04-07 21:09:25,092] INFO [SF_KAFKA_CONNECTOR] SnowflakeSinkConnector:start (com.snowflake.kafka.connector.SnowflakeSinkConnector:91) [2021-04-07 21:09:25,330] INFO [SF_KAFKA_CONNECTOR] initialized the snowflake connection (com.snowflake.kafka.connector.internal.SnowflakeConnectionServiceV1:38) [2021-04-07 21:09:25,336] INFO Finished creating connector TEST_CONNECTOR (org.apache.kafka.connect.runtime.Worker:224) [2021-04-07 21:09:25,337] INFO Skipping reconfiguration of connector sflksink since it is not running (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:285) [2021-04-07 21:09:25,338] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113) java.lang.NullPointerException at org.apache.kafka.connect.cli.ConnectStandalone$1.onCompletion(ConnectStandalone.java:104) at org.apache.kafka.connect.cli.ConnectStandalone$1.onCompletion(ConnectStandalone.java:98) at org.apache.kafka.connect.util.ConvertingFutureCallback.onCompletion(ConvertingFutureCallback.java:44) at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:185) at 
org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107) [2021-04-07 21:09:25,340] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65) [2021-04-07 21:09:25,341] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:211) [2021-04-07 21:09:25,345] INFO Stopped http_8083#2cc0fa2a{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306) [2021-04-07 21:09:25,354] INFO Stopped o.e.j.s.ServletContextHandler#5c83ae01{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865) [2021-04-07 21:09:25,360] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:222) [2021-04-07 21:09:25,360] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:77) [2021-04-07 21:09:25,360] INFO Stopping connector TEST_CONNECTOR (org.apache.kafka.connect.runtime.Worker:305) [2021-04-07 21:09:25,361] INFO [SF_KAFKA_CONNECTOR] SnowflakeSinkConnector:stop (com.snowflake.kafka.connector.SnowflakeSinkConnector:141) [2021-04-07 21:09:25,362] INFO Stopped connector TEST_CONNECTOR (org.apache.kafka.connect.runtime.Worker:321) [2021-04-07 21:09:25,362] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:151) [2021-04-07 21:09:25,365] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:67) [2021-04-07 21:09:25,365] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:172) [2021-04-07 21:09:25,369] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:87) [2021-04-07 21:09:25,371] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:70) The config file looks like below name=sflksink connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector tasks.max=1 topics=snowflake-connect-test buffer.count.records=10 buffer.flush.time=60 buffer.size.bytes=50 snowflake.url.name=url snowflake.user.name=<user> snowflake.database.name=<database> 
snowflake.schema.name=<schema> snowflake.private.key=<private_key> snowflake.warehouse.name=MY_WAREHOUSE key.converter=org.apache.kafka.connect.storage.StringConverter value.converter=com.snowflake.kafka.connector.records.SnowflakeJsonConverter. Any pointers would be helpful.
The log message says Connector update is available, please upgrade Snowflake Kafka Connector (1.5.0 -> 1.5.2), so I would suggest updating your connector JAR to the latest version and trying again.
Confluent RabbitMQ Source Connector - configuration or license-related error?
Our Kafka setup consists of brokers on AWS MSK and Confluent Kafka Connect (confluentinc/cp-kafka-connect:5.5.1) on an AWS EKS pod. We are trying to use the Confluent RabbitMQ Source Connector (trial version of the commercial connector, https://docs.confluent.io/5.5.1/connect/kafka-connect-rabbitmq/index.html) and are getting the error below. Connector Config - { "connector.class": "io.confluent.connect.rabbitmq.RabbitMQSourceConnector", "confluent.topic.bootstrap.servers": "b-1.###.amazonaws.com:9092, b-2.###.amazonaws.com:9092,b-3.###.amazonaws.com:9092,b-4.###.amazonaws.com:9092", "tasks.max": "1", "rabbitmq.password": "user", "rabbitmq.queue": "my_queue", "rabbitmq.username": "pass", "rabbitmq.virtual.host": "/", "rabbitmq.port": "port", "confluent.topic.replication.factor": "1", "rabbitmq.host": "rabbit_host_ip", "name": "Rabbit_Source_RT4", "kafka.topic": "my_topic", "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter" } GET Connector Status - { "name": "Rabbit_Source_RT4", "connector": { "state": "FAILED", "worker_id": "kfk-connect:8083", "trace": "java.lang.NullPointerException\n\tat io.confluent.license.License.readFully(License.java:195)\n\tat io.confluent.license.License.loadPublicKey(License.java:187)\n\tat io.confluent.license.License.loadPublicKey(License.java:181)\n\tat io.confluent.license.LicenseManager.loadPublicKey(LicenseManager.java:553)\n\tat io.confluent.license.LicenseManager.registerOrValidateLicense(LicenseManager.java:331)\n\tat io.confluent.connect.utils.licensing.ConnectLicenseManager.registerOrValidateLicense(ConnectLicenseManager.java:257)\n\tat io.confluent.connect.rabbitmq.RabbitMQSourceConnector.doStart(RabbitMQSourceConnector.java:62)\n\tat io.confluent.connect.rabbitmq.RabbitMQSourceConnector.start(RabbitMQSourceConnector.java:56)\n\tat org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:110)\n\tat org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:135)\n\tat 
org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:195)\n\tat org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:259)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1229)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:127)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1245)\n\tat org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1241)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n" }, "tasks": [], "type": "source" } The connector state is FAILED and no task was created. I also tried updating this configuration, but I get the same error every time. 
Logs - [2021-01-07 15:21:17,884] INFO Kafka version: 5.5.1-ccs (org.apache.kafka.common.utils.AppInfoParser) [2021-01-07 15:21:17,884] INFO Kafka commitId: a0a0000zzz0a0000 (org.apache.kafka.common.utils.AppInfoParser) [2021-01-07 15:21:17,884] INFO Kafka startTimeMs: 1610032877884 (org.apache.kafka.common.utils.AppInfoParser) [2021-01-07 15:21:17,884] INFO [Producer clientId=Rabbit_Source_RT4-license-manager] Cluster ID: -aAaAzxcvA1a0weaaa11A (org.apache.kafka.clients.Metadata) [2021-01-07 15:21:17,887] INFO [Consumer clientId=Rabbit_Source_RT4-license-manager, groupId=null] Cluster ID: -aAaAzxcvA1a0weaaa11A (org.apache.kafka.clients.Metadata) [2021-01-07 15:21:17,890] INFO [Consumer clientId=Rabbit_Source_RT4-license-manager, groupId=null] Subscribed to partition(s): _confluent-command-0 (org.apache.kafka.clients.consumer.KafkaConsumer) [2021-01-07 15:21:17,890] INFO [Consumer clientId=Rabbit_Source_RT4-license-manager, groupId=null] Seeking to EARLIEST offset of partition _confluent-command-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState) [2021-01-07 15:21:17,899] INFO [Consumer clientId=Rabbit_Source_RT4-license-manager, groupId=null] Resetting offset for partition _confluent-command-0 to offset 0. 
(org.apache.kafka.clients.consumer.internals.SubscriptionState) [2021-01-07 15:21:17,900] INFO Finished reading KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog) [2021-01-07 15:21:17,900] INFO Started KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog) [2021-01-07 15:21:17,900] INFO Started License Store (io.confluent.license.LicenseStore) [2021-01-07 15:21:17,901] INFO Validating Confluent License (io.confluent.connect.utils.licensing.ConnectLicenseManager) [2021-01-07 15:21:17,906] INFO Closing License Store (io.confluent.license.LicenseStore) [2021-01-07 15:21:17,906] INFO Stopping KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog) [2021-01-07 15:21:17,908] INFO [Producer clientId=Rabbit_Source_RT4-license-manager] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer) [2021-01-07 15:21:17,910] INFO Stopped KafkaBasedLog for topic _confluent-command (org.apache.kafka.connect.util.KafkaBasedLog) [2021-01-07 15:21:17,910] INFO Closed License Store (io.confluent.license.LicenseStore) [2021-01-07 15:21:17,910] ERROR WorkerConnector{id=Rabbit_Source_RT4} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector) java.lang.NullPointerException at io.confluent.license.License.readFully(License.java:195) at io.confluent.license.License.loadPublicKey(License.java:187) at io.confluent.license.License.loadPublicKey(License.java:181) at io.confluent.license.LicenseManager.loadPublicKey(LicenseManager.java:553) at io.confluent.license.LicenseManager.registerOrValidateLicense(LicenseManager.java:331) at io.confluent.connect.utils.licensing.ConnectLicenseManager.registerOrValidateLicense(ConnectLicenseManager.java:257) at io.confluent.connect.rabbitmq.RabbitMQSourceConnector.doStart(RabbitMQSourceConnector.java:62) at 
io.confluent.connect.rabbitmq.RabbitMQSourceConnector.start(RabbitMQSourceConnector.java:56) at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:110) at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:135) at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:195) at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:259) at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1229) at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:127) at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1245) at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1241) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) [2021-01-07 15:21:17,913] INFO Finished creating connector Rabbit_Source_RT4 (org.apache.kafka.connect.runtime.Worker) [2021-01-07 15:21:17,913] INFO [Worker clientId=connect-1, groupId=compose-kfk-connect-group] Skipping reconfiguration of connector Rabbit_Source_RT4 since it is not running (org.apache.kafka.connect.runtime.distributed.DistributedHerder) [2021-01-07 15:21:17,913] INFO [Worker clientId=connect-1, groupId=compose-kfk-connect-group] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder) Output of GET /connector-plugins request contains - {"class":"io.confluent.connect.rabbitmq.RabbitMQSourceConnector","type":"source","version":"0.0.0.0"}, Also checked and found that '_confluent-command' topic does not contain any messages. 
Is this because the trial period is over and an Enterprise license is needed, or is it due to some error in the configuration? How can I verify the time remaining on the trial (since we are not using Control Center)? Thanks in advance.
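One way to check the empty-topic symptom mentioned above is to read _confluent-command directly; if nothing is printed before the timeout, no license record has been stored yet. This is a sketch only: the broker address localhost:9092 is an assumption, so substitute your own bootstrap servers.

```shell
# Dump any license records stored by Confluent connectors.
# localhost:9092 is a placeholder broker address; adjust for your cluster.
# If the topic is empty, the consumer exits after the 10s timeout with no output.
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic _confluent-command --from-beginning --timeout-ms 10000
```

An empty topic is consistent with the NullPointerException in License.loadPublicKey, since the license manager found nothing to read back.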
org.apache.kafka.connect.runtime.rest.errors.BadRequestException
I am trying to write a Kafka connector to move data from a Kafka topic into MongoDB (sink). I have added the required configurations in the connect-json-standalone.properties file and in the connect-mongo-sink.properties file in the Kafka folder. While starting the connector, I get the exception below:
[2019-07-23 18:07:17,274] INFO Started o.e.j.s.ServletContextHandler#76e3b45b{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:855)
[2019-07-23 18:07:17,274] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:231)
[2019-07-23 18:07:17,274] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:56)
[2019-07-23 18:07:17,635] INFO Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} (org.mongodb.driver.cluster:71)
[2019-07-23 18:07:17,636] INFO Adding discovered server localhost:27017 to client view of cluster (org.mongodb.driver.cluster:71)
[2019-07-23 18:07:17,760] INFO Closing all connections to repracli/localhost:27017 (io.debezium.connector.mongodb.ConnectionContext:86)
[2019-07-23 18:07:17,768] ERROR Failed to create job for ./etc/kafka/connect-mongodb-sink.properties (org.apache.kafka.connect.cli.ConnectStandalone:104)
[2019-07-23 18:07:17,769] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:115)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
    at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:112)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:423)
    at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:188)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:109)
[2019-07-23 18:07:17,782] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:66)
[2019-07-23 18:07:17,782] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:239)
[2019-07-23 18:07:17,790] INFO Stopped http_localhost8084#5f96f6a2{HTTP/1.1,[http/1.1]}{localhost:8084} (org.eclipse.jetty.server.AbstractConnector:341)
[2019-07-23 18:07:17,790] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:167)
[2019-07-23 18:07:17,792] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:256)
[2019-07-23 18:07:17,793] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:94)
[2019-07-23 18:07:17,793] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:185)
[2019-07-23 18:07:17,794] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:66)
[2019-07-23 18:07:17,796] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:206)
[2019-07-23 18:07:17,799] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:111)
[2019-07-23 18:07:17,800] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:71)
I have tried changing connection.uri in connect-mongo-sink.properties in several ways, which didn't work out.
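The error message itself points at Kafka Connect's config validation endpoint, which reports per-field errors. As a sketch, you could POST the sink properties (as JSON) to a running worker to see exactly which field "requires a value"; the port 8084 is taken from the Jetty log line above, and the payload shown is a trimmed stand-in for the full properties file.

```shell
# Validate the connector config against the plugin's config definition.
# The path segment is the connector's simple class name; 8084 comes from the log above.
curl -s -X PUT -H 'Content-Type: application/json' \
  --data '{"connector.class":"io.debezium.connector.mongodb.MongoDbConnector","tasks.max":"1","topics":"sample-consumerr-sink-topic","mongodb.hosts":"repracli/localhost:27017"}' \
  http://localhost:8084/connector-plugins/MongoDbConnector/config/validate
# In the JSON response, each field carries an "errors" array; the field whose
# array contains "A value is required" is the one that is missing.
```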
I have also tried some links I found via Google, which didn't solve my problem either:
https://groups.google.com/forum/#!topic/debezium/bC4TUld5NGw
https://github.com/confluentinc/kafka-connect-jdbc/issues/334
connect-json-standalone.properties:
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
connect-mongo-sink.properties:
name=mongodb-sink-connector
connector.class=io.debezium.connector.mongodb.MongoDbConnector
tasks.max=1
topics=sample-consumerr-sink-topic
type.name=kafka-connect
mongodb.hosts=repracli/localhost:27017
mongodb.collection=conn_mongo_sink_collc
mongodb.connection.uri=mongodb://localhost:27017/conn_mongo_sink_db?w=1&journal=true
I want the sink connector to consume the topic data into the MongoDB collection named "conn_mongo_sink_collc". Can anyone help me resolve this error? Note: I am using a 3-member MongoDB replica set in which port 27017 is the primary and 27018/27019 are secondaries.
io.debezium.connector.mongodb.MongoDbConnector is a Source connector, for getting data from MongoDB into Kafka. To stream data from Kafka into MongoDB, use a Sink connector. MongoDB recently launched their own sink connector and blogged about its use, including sample configuration.
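For reference, a minimal sketch of what the sink properties could look like with MongoDB's official connector instead of the Debezium source class. This assumes the com.mongodb.kafka.connect plugin is installed on the worker; the topic, database, collection, and replica-set names are carried over from the question, so adjust them as needed.

```properties
# connect-mongo-sink.properties (sketch, assuming MongoDB's official sink connector)
name=mongodb-sink-connector
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
tasks.max=1
topics=sample-consumerr-sink-topic
# List all replica-set members so the driver can fail over to a new primary.
connection.uri=mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=repracli
database=conn_mongo_sink_db
collection=conn_mongo_sink_collc
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
```

Note that the official sink connector uses connection.uri/database/collection rather than the Debezium-style mongodb.* property names shown in the question.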
I faced the same issue. This can also happen when the connector config is incomplete or invalid. Check whether the necessary properties are set by consulting the Debezium documentation.