kafka connect - Restarting the worker causes a rebalance issue - apache-kafka

I'm using a two-node Kafka Connect cluster in distributed mode. Both nodes run fine, but the moment I restart the worker service, the connector that was running on that node goes to UNASSIGNED, and exactly five minutes later it changes back to ASSIGNED. I don't understand why this happens; shouldn't it move that connector's tasks to the other running node?
Here are the logs (five minutes after the worker restart):
Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:221]
[2021-08-17 07:23:46,120] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] (Re-)joining group [org.apache.kafka.clients.consumer.internals.AbstractCoordinator:538]
[2021-08-17 07:23:46,124] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Successfully joined group with generation Generation{generationId=27, memberId='connect-1-56d39766-4974-4203-945b-6eee4fe811e7', protocol='sessioned'} [org.apache.kafka.clients.consumer.internals.AbstractCoordinator:594]
[2021-08-17 07:23:46,128] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Successfully synced group in generation Generation{generationId=27, memberId='connect-1-56d39766-4974-4203-945b-6eee4fe811e7', protocol='sessioned'} [org.apache.kafka.clients.consumer.internals.AbstractCoordinator:758]
[2021-08-17 07:23:46,129] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Joined group at generation 27 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-ccdf6d6a-eeab-423c-9611-56795d0deca9', leaderUrl='http://172.30.32.13:8083/', offset=20, connectorIds=[mysql-connector-01], taskIds=[mysql-connector-01-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder:1694]
[2021-08-17 07:23:46,129] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Starting connectors and tasks using config offset 20 [org.apache.kafka.connect.runtime.distributed.DistributedHerder:1244]
[2021-08-17 07:23:46,130] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Starting task mysql-connector-01-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder:1286]
[2021-08-17 07:23:46,131] [INFO] [Worker clientId=connect-1, groupId=debezium-cluster1] Starting connector mysql-connector-01 [org.apache.kafka.connect.runtime.distributed.DistributedHerder:1321]
I tried restarting the connector via the REST API, but it fails:
curl -X POST 172.30.34.99:8083/connectors/mysql-connector-01/restart
{"error_code":409,"message":"Cannot complete request momentarily due to no known leader URL, likely because a rebalance was underway."}

I found the cause for this: it's due to Kafka Connect's scheduled rebalance delay. An excellent blog post that explains it in depth: https://www.confluent.io/blog/incremental-cooperative-rebalancing-in-kafka/
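For anyone hitting the same five-minute window: with the incremental cooperative (`sessioned`) protocol visible in the logs above, the leader deliberately keeps a departed worker's connectors and tasks UNASSIGNED, hoping the worker rejoins. The wait is a worker-level setting; a sketch of how it could be tuned (the value below is illustrative, not from my cluster):

```properties
# connect-distributed.properties
# How long the leader waits for a lost worker to rejoin before
# reassigning its connectors/tasks to the remaining workers.
# Default is 300000 ms (5 minutes) - exactly the delay observed above.
scheduled.rebalance.max.delay.ms=60000
```

Lowering it gives faster failover, at the cost of more task shuffling during ordinary rolling restarts, so pick a value that fits your deployment.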

Related

Running Kafka Connect in distributed mode, no obvious errors, but data does not end up in sink connector

I'm running Kafka Connect via this repo https://github.com/entechlog/kafka-examples/tree/master/kafka-connect-standalone, except I have added extra configs for AWS MSK IAM authentication. I've also updated the .env file to use different variables, like the AWS MSK IAM jar file, AWS key/secret key credentials, and a few other minor things. Note that this repo runs in standalone mode, but I have updated the launch shell script to run in distributed mode: exec connect-distributed /etc/"${COMPONENT}"/"${COMPONENT}".properties. However, I have NOT created a file called kafka-connect.properties.template, because when I do, I get a whole host of errors like Missing required configuration "group.id" which has no default value. That makes no sense to me, as I can see group.id in the docker-compose.yml file.
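As a point of comparison (not taken from that repo), distributed mode requires a few worker-level settings that standalone mode does not, which is where errors like the `group.id` one come from if the generated properties file lacks them. A minimal sketch with placeholder values:

```properties
# connect-distributed worker essentials (placeholder values)
bootstrap.servers=broker:9092
group.id=connect-kafka-connect-group
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
```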
My goal is to get data from a third party Kafka cluster into BigQuery.
When I run my docker-compose file, I see no errors and only a few warnings, nothing that stands out to me. I get a lot of warnings like this: [2021-12-14 01:57:37,917] WARN The configuration 'camelcase.default.dataset' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:380), and this dataset is required to send data to BigQuery. It makes no sense that settings like the dataset are not being used.
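One possible reading of those warnings (an assumption on my part, since the full compose file isn't shown): in distributed mode, connector-level settings such as the BigQuery dataset are not read from the worker or consumer properties at all - the worker logs them as unknown configs there - and instead must be submitted per connector through the REST API. A sketch of such a payload, with placeholder names; the exact config keys depend on the connector version:

```json
{
  "name": "bigquery-sink",
  "config": {
    "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
    "topics": "my-topic",
    "project": "my-gcp-project",
    "defaultDataset": "my_dataset"
  }
}
```

This would be POSTed to http://localhost:8083/connectors, after which the connector should appear in the (currently empty) list.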
Here are the latest logs:
[2021-12-14 01:57:37,921] INFO Kafka version: 6.2.1-ce (org.apache.kafka.common.utils.AppInfoParser:119)
[2021-12-14 01:57:37,922] INFO Kafka commitId: 14770bfc4e973178 (org.apache.kafka.common.utils.AppInfoParser:120)
[2021-12-14 01:57:37,922] INFO Kafka startTimeMs: 1639447057921 (org.apache.kafka.common.utils.AppInfoParser:121)
[2021-12-14 01:57:39,332] INFO [Producer clientId=producer-3] Cluster ID: k2eIXxm_RkmWu2-R2d0N1Q (org.apache.kafka.clients.Metadata:279)
[2021-12-14 01:57:39,413] INFO [Consumer clientId=consumer-connect-kafka-connect-group-3, groupId=connect-kafka-connect-group] Cluster ID: k2eIXxm_RkmWu2-R2d0N1Q (org.apache.kafka.clients.Metadata:279)
[2021-12-14 01:57:39,428] INFO [Consumer clientId=consumer-connect-kafka-connect-group-3, groupId=connect-kafka-connect-group] Subscribed to partition(s): connect-configs-0 (org.apache.kafka.clients.consumer.KafkaConsumer:1123)
[2021-12-14 01:57:39,428] INFO [Consumer clientId=consumer-connect-kafka-connect-group-3, groupId=connect-kafka-connect-group] Seeking to EARLIEST offset of partition connect-configs-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState:619)
[2021-12-14 01:57:40,727] INFO Finished reading KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:228)
[2021-12-14 01:57:40,728] INFO Started KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:230)
[2021-12-14 01:57:40,729] INFO Started KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:290)
[2021-12-14 01:57:40,729] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Herder started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:312)
[2021-12-14 01:57:45,059] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Cluster ID: k2eIXxm_RkmWu2-R2d0N1Q (org.apache.kafka.clients.Metadata:279)
[2021-12-14 01:57:45,089] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Discovered group coordinator <bootstrap server and port here> (id: 2147483643 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:848)
[2021-12-14 01:57:45,095] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:221)
[2021-12-14 01:57:45,095] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:538)
[2021-12-14 01:57:45,957] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:538)
[2021-12-14 01:57:49,038] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Successfully joined group with generation Generation{generationId=1, memberId='connect-1-a4cc5355-60da-46c3-8228-bff2de664f2c', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:594)
[2021-12-14 01:57:49,160] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Successfully synced group in generation Generation{generationId=1, memberId='connect-1-a4cc5355-60da-46c3-8228-bff2de664f2c', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:758)
[2021-12-14 01:57:49,161] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Joined group at generation 1 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-a4cc5355-60da-46c3-8228-bff2de664f2c', leaderUrl='http://kafka-connect:8083/', offset=10, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1694)
[2021-12-14 01:57:49,162] WARN [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1119)
[2021-12-14 01:57:49,162] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Current config state offset -1 is behind group assignment 10, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1183)
[2021-12-14 01:57:49,359] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Finished reading to end of log and updated config snapshot, new config log offset: 10 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1190)
[2021-12-14 01:57:49,359] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Starting connectors and tasks using config offset 10 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1244)
[2021-12-14 01:57:49,359] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1272)
[2021-12-14 01:57:50,791] INFO [Worker clientId=connect-1, groupId=connect-kafka-connect-group] Session key updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1582)
And also this:
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be ignored.
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will be ignored.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
And when I navigate to http://localhost:8083/connectors in my browser, I get an empty list [].

Kafka JDBC Sink Connector: no tasks assigned

I am trying to start a JDBC sink connector with the following configuration:
{
  "name": "crm_data-sink_hh",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": 6,
    "topics": "crm_account,crm_competitor,crm_event,crm_event_participation",
    "connection.url": "jdbc:postgresql://db_host/hh?prepareThreshold=0",
    "connection.user": "db_user",
    "connection.password": "${file:db_hh_kafka_connect_pass}",
    "dialect.name": "PostgreSqlDatabaseDialect",
    "insert.mode": "upsert",
    "pk.mode": "record_value",
    "pk.fields": "guid",
    "errors.tolerance": "all",
    "errors.log.enable": true,
    "errors.log.include.messages": true,
    "errors.deadletterqueue.topic.name": "crm_data_deadletterqueue",
    "errors.deadletterqueue.context.headers.enable": true
  }
}
But no tasks are running, even though the connector is in the RUNNING state:
curl -X GET http://kafka-connect:10900/connectors/crm_data-sink_hh/status
{"name":"crm_data-sink_hh","connector":{"state":"RUNNING","worker_id":"172.16.24.14:10900"},"tasks":[],"type":"sink"}
I have faced this issue many times, and I'm confused because it happens randomly. My question is very similar to this question. I would appreciate any help!
Update. 11/04/2019 (unfortunately, I now have only INFO-level logs)
Finally, after a few attempts, I got the connector running with tasks by updating the config of the existing connector crm_data-sink_db_hh:
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"RUNNING","worker_id":"192.168.1.198:10900"},"tasks":[],"type":"sink"}
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"RUNNING","worker_id":"192.168.1.198:10900"},"tasks":[],"type":"sink"}
$ curl -X PUT -d @new_config.json http://docker21:10900/connectors/crm_data-sink_db_hh/config -H 'Content-Type: application/json'
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"UNASSIGNED","worker_id":"192.168.1.198:10900"},"tasks":[],"type":"sink"}
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"RUNNING","worker_id":"172.16.36.11:10900"},"tasks":[{"state":"UNASSIGNED","id":0,"worker_id":"172.16.32.11:10900"},{"state":"UNASSIGNED","id":1,"worker_id":"172.16.32.11:10900"},{"state":"RUNNING","id":2,"worker_id":"192.168.2.243:10900"},{"state":"UNASSIGNED","id":3,"worker_id":"172.16.32.11:10900"},{"state":"UNASSIGNED","id":4,"worker_id":"172.16.32.11:10900"}],"type":"sink"}
$ curl -X GET http://docker61:10900/connectors/crm_data-sink_db_hh/status
{"name":"crm_data-sink_db_hh","connector":{"state":"RUNNING","worker_id":"192.168.1.198:10900"},"tasks":[{"state":"RUNNING","id":0,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":1,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":2,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":3,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":4,"worker_id":"192.168.1.198:10900"},{"state":"RUNNING","id":5,"worker_id":"192.168.1.198:10900"}],"type":"sink"}
Log:
[2019-04-11 16:02:15,167] INFO Connector crm_data-sink_db_hh config updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:15,668] INFO Rebalance started (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:15,668] INFO Stopping connector crm_data-source (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:15,668] INFO Stopping task crm_data-source-0 (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:15,668] INFO Stopping connector crm_data-sink_pandora (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:15,668] INFO Stopping JDBC source task (io.confluent.connect.jdbc.source.JdbcSourceTask)
[2019-04-11 16:02:15,668] INFO Stopping table monitoring thread (io.confluent.connect.jdbc.JdbcSourceConnector)
...
Stopping connectors and tasks
...
[2019-04-11 16:02:17,373] INFO 192.168.1.91 - - [11/Apr/2019:13:02:14 +0000] "POST /connectors HTTP/1.1" 201 768 2468 (org.apache.kafka.connect.runtime.rest.RestServer)
[2019-04-11 16:02:20,668] ERROR Graceful stop of task crm_data-source-1 failed. (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:20,669] ERROR Graceful stop of task crm_data-source-0 failed. (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:20,669] ERROR Graceful stop of task crm_data-source-3 failed. (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:20,669] ERROR Graceful stop of task crm_data-source-2 failed. (org.apache.kafka.connect.runtime.Worker)
[2019-04-11 16:02:20,669] INFO Finished stopping tasks in preparation for rebalance (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,669] INFO [Worker clientId=connect-1, groupId=21] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-04-11 16:02:20,681] INFO Tasks [crm_data-sink_hhru-0, crm_data-sink_hhru-3, crm_data-sink_hhru-4, crm_data-sink_hhru-1, crm_data-sink_hhru-2, crm_data-sink_hhru-5, crm_data-pandora_sink-0, crm_data-pandora_sink-2, crm_data-pandora_sink-1, crm_data-pandora_sink-4, crm_data-pandora_sink-3, crm_data-pandora_sink-03-0, crm_data-pandora_sink-03-2, crm_data-pandora_sink-03-1, crm_data-pandora_sink-00-1, crm_data-pandora_sink-00-0, crm_data-pandora_sink-00-3, crm_data-pandora_sink-00-2, crcrm_data-pandora_sink-00-4, crm_data-sink_hh-00-0, crm_data-sink_hh-00-1, crm_data-sink_hh-00-2, crm_data-pandora_sink-test-3, crm_data-pandora_sink-test-2, crm_data-pandora_sink-test-4,crm_data-pandora_sink-01-2, crm_data-pandora_sink-01-1, crm_data-pandora_sink-01-0, crm_data-source-3, crm_data-source-2, crm_data-source-1, crm_data-source-0, crm_data-sink_db_hh-0, crm_data-sink_db_hh-1, crm_data-sink_db_hh-2, crm_data-sink_hh-01-0, crm_data-sink_hh-01-1, crm_data-sink_hh-01-2, crm_data-sink_hh-01-3, crm_data-sink_hh-00-3, crm_data-sink_hh-00-4, crm_data-sink_hh-00-5, crm_data-sink_hh-1, crm_data-sink_hh-0, crm_data-sink_hh-3, crm_data-sink_hh-2, crm_data-sink_hh-5, crm_data-sink_hh-4, crm_data-sink_pandora-5, crm_data-sink_pandora-0, crm_data-sink_pandora-1, crm_data-sink_pandora-2, crm_data-sink_pandora-3, crm_data_account_on_competitors-source-0, crm_data-sink_pandora-4] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,681] INFO Tasks [] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,682] INFO Tasks [] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,683] INFO Tasks [] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,684] INFO Tasks [] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO [Worker clientId=connect-1, groupId=21] Successfully joined group with generation 2206465 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2019-04-11 16:02:20,685] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-57140c1d-3b19-4fc0-b4ca-e6ce272e1924', leaderUrl='http://192.168.1.198:10900/', offset=1168, connectorIds=[crm_data-sink_db_hh, crm_data-source, crm_data-sink_pandora], taskIds=[crm_data-source-0, crm_data-source-1, crm_data-source-2, crm_data-source-3, crm_data-sink_pandora-0, crm_data-sink_pandora-1, crm_data-sink_pandora-2, crm_data-sink_pandora-3, crm_data-sink_pandora-4, crm_data-sink_pandora-5]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting connectors and tasks using config offset 1168 (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting connector crm_data-sink_db_hh (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting connector crm_data-source (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting connector crm_data-sink_pandora (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2019-04-11 16:02:20,685] INFO Starting task crm_data-source-0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
...
Starting connectors and tasks
...
Update. 12/04/2019
I increased the log level and reproduced the issue. I see a lot of records for different tasks (tasks of already-deleted connectors, or tasks that are not yet running) like this:
[2019-04-12 15:14:32,360] DEBUG Storing new config for task crm_data-sink_hh-3 this will wait for a commit message before the new config will take effect. New config: {...} (org.apache.kafka.connect.storage.KafkaConfigBackingStore)
There are tasks of deleted connectors in the task list - is that OK? The same situation exists in the internal Kafka Connect topics.
My main question: why does the connector not fail when no tasks are running for whatever reason? In this situation the connector effectively does not work at all.
It looks like a bug in Kafka Connect itself. There is a Kafka Jira ticket about this issue.
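As a possible workaround on newer Connect versions (3.0 and later, to the best of my knowledge), the restart endpoint can restart the connector together with its tasks in one call, which avoids the config-update dance above. A sketch using the host and connector name from this question:

```shell
# Restart the connector AND all of its tasks (Connect >= 3.0)
curl -X POST "http://docker21:10900/connectors/crm_data-sink_db_hh/restart?includeTasks=true&onlyFailed=false"
```

On older versions, only the connector instance itself is restarted and stuck tasks stay stuck, which matches the behavior described here.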

ProducerFencedException Processing Kafka Stream

I'm using Kafka 1.1.0. A Kafka Streams application consistently throws this exception (albeit with different messages):
WARN o.a.k.s.p.i.RecordCollectorImpl#onCompletion:166 - task [0_0] Error sending record (key KEY value VALUE timestamp TIMESTAMP) to topic OUTPUT_TOPIC due to Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.; No more records will be sent and no more offsets will be recorded for this task.
WARN o.a.k.s.p.i.AssignedStreamsTasks#closeZombieTask:202 - stream-thread [90556797-3a33-4e35-9754-8a63200dc20e-StreamThread-1] stream task 0_0 got migrated to another thread already. Closing it as zombie.
WARN o.a.k.s.p.internals.StreamThread#runLoop:752 - stream-thread [90556797-3a33-4e35-9754-8a63200dc20e-StreamThread-1] Detected a task that got migrated to another thread. This implies that this thread missed a rebalance and dropped out of the consumer group. Trying to rejoin the consumer group now.
org.apache.kafka.streams.errors.TaskMigratedException: StreamsTask taskId: 0_0
ProcessorTopology:
KSTREAM-SOURCE-0000000000:
topics:
[INPUT_TOPIC]
children: [KSTREAM-PEEK-0000000001]
KSTREAM-PEEK-0000000001:
children: [KSTREAM-MAP-0000000002]
KSTREAM-MAP-0000000002:
children: [KSTREAM-SINK-0000000003]
KSTREAM-SINK-0000000003:
topic:
OUTPUT_TOPIC
Partitions [INPUT_TOPIC-0]
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:238)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:411)
at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:918)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:798)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)
Caused by: org.apache.kafka.common.errors.ProducerFencedException: task [0_0] Abort sending since producer got fenced with a previous record
I'm not sure what is causing this exception. When I restart the application, it appears to successfully process a few records before failing with the same exception. Strangely, the records are successfully processed several times, even though the stream is set to exactly-once processing. Here is the stream configuration:
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;
import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;

Properties streamProperties = new Properties();
streamProperties.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
streamProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, service.getName());
streamProperties.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, "exactly_once");
//Should be DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG - but that field is private.
streamProperties.put("default.production.exception.handler", ErrorHandler.class);
streamProperties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, brokerUrl);
streamProperties.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
streamProperties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
streamProperties.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
streamProperties.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
Out of the three servers, only two generate relevant logs when the Streams application is restarted. Here are the logs from the first server:
[2018-05-09 14:42:14,635] INFO [GroupCoordinator 1]: Member INPUT_TOPIC-09dd8ac8-2cd6-4dd1-b963-63ea804c8fcc-StreamThread-1-consumer-3fedb398-91fe-480a-b5ee-1b5879d0956c in group INPUT_TOPIC has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2018-05-09 14:42:14,636] INFO [GroupCoordinator 1]: Preparing to rebalance group INPUT_TOPIC with old generation 1 (__consumer_offsets-29) (kafka.coordinator.group.GroupCoordinator)
[2018-05-09 14:42:14,636] INFO [GroupCoordinator 1]: Group INPUT_TOPIC with generation 2 is now empty (__consumer_offsets-29) (kafka.coordinator.group.GroupCoordinator)
[2018-05-09 14:42:15,848] INFO [GroupCoordinator 1]: Preparing to rebalance group INPUT_TOPIC with old generation 2 (__consumer_offsets-29) (kafka.coordinator.group.GroupCoordinator)
[2018-05-09 14:42:15,848] INFO [GroupCoordinator 1]: Stabilized group INPUT_TOPIC generation 3 (__consumer_offsets-29) (kafka.coordinator.group.GroupCoordinator)
[2018-05-09 14:42:15,871] INFO [GroupCoordinator 1]: Assignment received from leader for group INPUT_TOPIC for generation 3 (kafka.coordinator.group.GroupCoordinator)
And from the second server:
[2018-05-09 14:42:16,228] INFO [TransactionCoordinator id=0] Initialized transactionalId INPUT_TOPIC-0_0 with producerId 2010 and producer epoch 37 on partition __transaction_state-37 (kafka.coordinator.transaction.TransactionCoordinator)
[2018-05-09 14:44:22,121] INFO [TransactionCoordinator id=0] Completed rollback ongoing transaction of transactionalId: INPUT_TOPIC-0_0 due to timeout (kafka.coordinator.transaction.TransactionCoordinator)
[2018-05-09 14:44:42,263] ERROR [ReplicaManager broker=0] Error processing append operation on partition OUTPUT_TOPIC-0 (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.ProducerFencedException: Producer's epoch is no longer valid. There is probably another producer with a newer epoch. 37 (request epoch), 38 (server epoch)
It appears that the first server sees that the consumer has failed and removes it from the consumer group before it registers with the second server. Any ideas what could be causing the consumer to fail? Or any ideas for handling this failure gracefully? It's possible that it is this bug; does anyone know of a possible workaround?
I'm not sure what caused the problem, but reducing max.poll.records to 1 fixed it.
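That fix is consistent with the broker log above ("Completed rollback ongoing transaction ... due to timeout"): with exactly-once enabled, each poll batch is processed inside a transaction, and if a batch takes longer than the transaction timeout, the broker aborts it and fences the producer. Two knobs are involved; a sketch with illustrative values (in Streams, these are forwarded via the consumer. and producer. prefixes):

```properties
# Smaller poll batches keep each exactly-once transaction short.
consumer.max.poll.records=1
# Or give in-flight transactions more headroom (must not exceed the
# broker-side transaction.max.timeout.ms).
producer.transaction.timeout.ms=120000
```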

worker not recovered - Current config state offset 5 is behind group assignment 20, reading to end of config log

I am using a 3-node Kafka Connect cluster to write data from a source to a Kafka topic, and from the topic to a destination. Everything works fine in distributed mode, but when one of the workers is stopped and then restarted, I get the messages below.
[2017-09-13 23:48:44,519] WARN Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:741)
[2017-09-13 23:48:44,519] INFO Current config state offset 5 is behind group assignment 20, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:785)
[2017-09-13 23:48:45,018] INFO Finished reading to end of log and updated config snapshot, new config log offset: 5 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:789)
[2017-09-13 23:48:45,018] INFO Current config state offset 5 does not match group assignment 20. Forcing rebalance. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:765)
[2017-09-13 23:48:45,018] INFO Rebalance started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1187)
[2017-09-13 23:48:45,018] INFO Wasn't unable to resume work after last rebalance, can skip stopping connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1219)
[2017-09-13 23:48:45,018] INFO (Re-)joining group connect-cluster (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:432)
[2017-09-13 23:48:45,023] INFO Successfully joined group connect-cluster with generation 38 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:399)
[2017-09-13 23:48:45,023] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-e51c1e8b-c95a-406b-8c56-2a0d4fc432f6', leaderUrl='http://10.10.10.10:8083/', offset=20, connectorIds=[], taskIds=[oracle_jdbc_sink_test-0]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1166)
[2017-09-13 23:48:45,023] WARN Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:741)
[2017-09-13 23:48:45,023] INFO Current config state offset 5 is behind group assignment 20, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:785)
[2017-09-13 23:48:45,535] INFO Finished reading to end of log and updated config snapshot, new config log offset: 5 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:789)
[2017-09-13 23:48:45,535] INFO Current config state offset 5 does not match group assignment 20. Forcing rebalance. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:765)
[2017-09-13 23:48:45,535] INFO Rebalance started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1187)
[2017-09-13 23:48:45,535] INFO Wasn't unable to resume work after last rebalance, can skip stopping connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1219)
[2017-09-13 23:48:45,535] INFO (Re-)joining group connect-cluster (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:432)
[2017-09-13 23:48:45,540] INFO Successfully joined group connect-cluster with generation 38 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:399)
[2017-09-13 23:48:45,540] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-e51c1e8b-c95a-406b-8c56-2a0d4fc432f6', leaderUrl='http://10.10.10.10:8083/', offset=20, connectorIds=[], taskIds=[oracle_jdbc_sink_test-0]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1166)
[2017-09-13 23:48:45,540] WARN Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:741)
[2017-09-13 23:48:45,540] INFO Current config state offset 5 is behind group assignment 20, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:785)
[2017-09-13 23:48:46,042] INFO Finished reading to end of log and updated config snapshot, new config log offset: 5 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:789)
[2017-09-13 23:48:46,042] INFO Current config state offset 5 does not match group assignment 20. Forcing rebalance. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:765)
[2017-09-13 23:48:46,042] INFO Rebalance started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1187)
[2017-09-13 23:48:46,042] INFO Wasn't unable to resume work after last rebalance, can skip stopping connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1219)
You can try resolving this issue by deleting and recreating the Connect internal config topic, or by changing the worker group.id.
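A likely root cause worth checking first (an assumption, since the topic setup isn't shown): the config storage topic must have exactly one partition and cleanup.policy=compact. If it was auto-created with several partitions, "reading to the end of the config log" only covers one partition and can stop short of the offset in the group assignment, producing exactly this rebalance loop. A sketch of creating it correctly (broker address and replication factor are placeholders):

```shell
# The Connect config topic must be single-partition and compacted.
kafka-topics.sh --bootstrap-server broker:9092 --create \
  --topic connect-configs --partitions 1 --replication-factor 3 \
  --config cleanup.policy=compact
```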

kafka consumer keeps looping over a bunch of messages after CommitFailedException

I am running a multi-threaded Kafka 091 consumer (the new consumer API).
The way I generate a client.id is by combining the hostname the consumer runs on, an AtomicInt, and the PID of the process.
I am running into issues when I stop and restart the consumer. The consumer keeps trying to process the offsets that were not consumed by the previous run (about 100 of them), but it keeps failing with this message:
2016-10-21 14:22:55,293 [pool-3-thread-6] INFO o.a.k.c.c.i.AbstractCoordinator : Marking the coordinator 2147483647 dead.
2016-10-21 14:22:55,295 [pool-3-thread-6] ERROR o.a.k.c.c.i.ConsumerCoordinator : Error UNKNOWN_MEMBER_ID occurred while committing offsets for group x.cg
2016-10-21 14:22:55,296 [pool-3-thread-6] ERROR o.a.k.c.c.i.ConsumerCoordinator : Offset commit failed.
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed due to group rebalance
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:552)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:493)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:184)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:886)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
at com.kfc.kafka.consumer.KFCConsumer$KafkaConsumerRunner.run(KFCConsumer.java:102)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-10-21 14:22:55,397 [pool-3-thread-6] INFO o.a.k.c.c.i.AbstractCoordinator : Attempt to join group x.cg failed due to unknown member id, resetting and retrying.
.........
2016-10-21 14:22:58,124 [pool-3-thread-3] INFO o.a.k.c.c.i.AbstractCoordinator : Attempt to heart beat failed since the group is rebalancing, try to re-join group.
From the Kafka log, I see a lot of rebalances happening:
[2016-10-21 21:28:18,196] INFO [GroupCoordinator 1]: Stabilized group x.cg generation 1 (kafka.coordinator.GroupCoordinator)
[2016-10-21 21:28:18,200] INFO [GroupCoordinator 1]: Assignment received from leader for group x.cg for generation 1 (kafka.coordinator.GroupCoordinator)
[2016-10-21 21:28:18,952] INFO [GroupCoordinator 1]: Preparing to restabilize group x.cg with old generation 1 (kafka.coordinator.GroupCoordinator)
[2016-10-21 21:28:48,233] INFO [GroupCoordinator 1]: Stabilized group x.cg generation 2 (kafka.coordinator.GroupCoordinator)
[2016-10-21 21:28:48,243] INFO [GroupCoordinator 1]: Assignment received from leader for group x.cg for generation 2 (kafka.coordinator.GroupCoordinator)
It turned out we were having long, recurring pauses (slow network, problems with external components, etc.) in the external systems our consumer was interacting with.
The solution was to split our consumer into three consumers with different consumer groups and different Kafka configs (heartbeat.interval.ms, session.timeout.ms, request.timeout.ms, max.partition.fetch.bytes).
Having three consumers with custom values for the above properties got rid of the problem.
The general lesson is to avoid a lot of external communication inside the consumer loop, since it increases the uncertainty in the consumer's behavior; when you do need external calls, make sure the consumer configs are in line with the SLAs of the external components.
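A sketch of one such tuned config, using only java.util.Properties so it stands alone. The numeric values and group name are illustrative assumptions, not the answerer's actual settings; the right numbers depend on how long your external calls can block between poll() invocations:

```java
import java.util.Properties;

public class TunedConsumerConfig {
    // Builds a config for a consumer group whose processing involves slow
    // external calls. Values are examples only, chosen to keep the
    // relationships between the timeouts valid.
    public static Properties slowSinkConfig() {
        Properties props = new Properties();
        props.put("group.id", "x.cg.slow-sink");           // separate group per workload
        props.put("session.timeout.ms", "30000");          // tolerate longer processing pauses
        props.put("heartbeat.interval.ms", "10000");       // must be well below session.timeout.ms
        props.put("request.timeout.ms", "40000");          // must exceed session.timeout.ms
        props.put("max.partition.fetch.bytes", "1048576"); // cap the data fetched per poll
        return props;
    }

    public static void main(String[] args) {
        Properties p = slowSinkConfig();
        System.out.println(p.getProperty("session.timeout.ms"));
    }
}
```

The key relationships are that heartbeat.interval.ms stays well below session.timeout.ms (so heartbeats can keep the member alive), and request.timeout.ms stays above session.timeout.ms (so a join request is not abandoned before the group stabilizes).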