Why is my MSK connector in a failed state? - apache-kafka

I'm using AWS MSK and trying to create a Kafka connector using the Confluent Kafka SQS source connector.
Having uploaded the SQS source connector plugin (in zip form), I go to the MSK console and try to create the connector, specifying my existing MSK cluster and choosing my plugin.
After some time, the following error message appears:
There is an issue with the connector.
Code: UnknownError.Unknown
Message: The last operation failed. Retry the operation.
Connector is in Failed state
The rather useless error message means I don't know where to look next.
I tried the CloudWatch logs, but although my cluster is configured to send logs there, there is nothing related to this error.
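Worth noting: the log delivery configured on the MSK cluster covers the brokers, not MSK Connect, so connector failures never reach the cluster's log group. Connectors have their own log-delivery setting, which as far as I know can only be specified when the connector is created. A hedged sketch using the AWS CLI; every name, ARN, subnet, security group and the connector.class value below is a made-up placeholder:

aws kafkaconnect create-connector \
  --connector-name sqs-source-connector \
  --kafka-connect-version "2.7.1" \
  --capacity '{"provisionedCapacity":{"mcuCount":1,"workerCount":1}}' \
  --plugins '[{"customPlugin":{"customPluginArn":"arn:aws:kafkaconnect:us-east-1:123456789012:custom-plugin/sqs-source/abcd1234","revision":1}}]' \
  --kafka-cluster '{"apacheKafkaCluster":{"bootstrapServers":"b-1.mycluster.abc123.c2.kafka.us-east-1.amazonaws.com:9092","vpc":{"securityGroups":["sg-0123456789abcdef0"],"subnets":["subnet-0123456789abcdef0"]}}}' \
  --kafka-cluster-client-authentication '{"authenticationType":"NONE"}' \
  --kafka-cluster-encryption-in-transit '{"encryptionType":"PLAINTEXT"}' \
  --service-execution-role-arn arn:aws:iam::123456789012:role/msk-connect-role \
  --connector-configuration '{"connector.class":"com.example.sqs.SqsSourceConnector","tasks.max":"1"}' \
  --log-delivery '{"workerLogDelivery":{"cloudWatchLogs":{"enabled":true,"logGroup":"/msk-connect/sqs-source"}}}'

With connector logs delivered to their own log group, the stack trace behind "UnknownError.Unknown" usually shows up there.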

Related

Unable to start debezium connector in distributed mode

Trying to deploy Debezium using Kafka Connect in distributed mode is causing issues.
The Connect worker shut down without a clear exception.
The group.id declaration along with topic.regex is not forcing Connect to read only the matching topics; it is trying to consume from all the topics in the cluster.
Has anyone been able to run Debezium on connect-distributed?
I followed the instructions to run Debezium using connect-distributed.
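For comparison, here is a minimal sketch of a distributed worker config (connect-distributed.properties); all values are placeholders. Note that group.id only names the Connect worker group, and topics.regex is a sink-connector property, so neither restricts what a Debezium source connector captures from the database; that is controlled by Debezium's own include/exclude list settings:

# connect-distributed.properties -- minimal distributed worker sketch, values are placeholders
bootstrap.servers=broker1:9092
# identifies the worker group; it does not control which topics a source connector reads
group.id=debezium-connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# internal topics the workers use to share offsets, configs and status
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
offset.storage.replication.factor=3
config.storage.replication.factor=3
status.storage.replication.factor=3
plugin.path=/usr/share/java,/opt/connectors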

Mongodb kafka connector is running but the data is not getting published in sink cluster

I was using the MongoDB Kafka connector on Confluent Cloud, with both the source and sink clusters in MongoDB. Although the connectors on Confluent Cloud are running, and the source connector shows a spike as well as a count of messages processed each time data is inserted into the source cluster, the data is not getting published to the sink cluster (the source and sink clusters belong to two different MongoDB accounts). Can somebody tell me why it is not able to transmit the data?
As both connectors are connected successfully and are up and running, I was expecting that data added to the MongoDB source cluster would be reflected in the sink cluster.
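One thing worth checking in a setup like this is the topic name: the MongoDB source connector publishes to a topic named <database>.<collection> (prefixed by topic.prefix if one is set), and the sink only reads the topics it is explicitly subscribed to, so a mismatch produces exactly this "running but nothing arrives" symptom. A rough sketch using the self-managed MongoDB Kafka connector's property names (the fully managed Confluent Cloud connectors expose equivalent options under their own names); every URI, database, collection and topic below is a placeholder:

# source connector: reads change streams from MongoDB account 1, writes to Kafka
name=mongo-source
connector.class=com.mongodb.kafka.connect.MongoSourceConnector
connection.uri=mongodb+srv://user:pass@source-cluster.example.mongodb.net
database=shop
collection=orders
# with no topic.prefix this publishes to the topic "shop.orders"

# sink connector: reads from Kafka, writes to MongoDB account 2
name=mongo-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
connection.uri=mongodb+srv://user:pass@sink-cluster.example.mongodb.net
database=shop
collection=orders
# must match the topic the source actually writes to
topics=shop.orders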

Kafka connect-distributed mode fault tolerance not working

I have created a Kafka Connect cluster with 3 EC2 machines and started 3 connectors (debezium-postgres source), one on each machine, each reading a different set of tables from the Postgres source. On one of the machines I started the S3 sink connector as well. So the changed data from Postgres is moved to the Kafka broker via the three source connectors, and the S3 sink connector consumes these messages and pushes them to an S3 bucket.
The cluster is working fine and so are the connectors. When I pause one of the connectors running on an EC2 machine, I expect its tasks to be taken over by another connector (postgres-debezium) running on another machine, but that's not happening.
I installed Kafdrop as well to monitor the brokers. I see the three internal topics (connect-offsets, connect-status and connect-configs) getting populated with the necessary offsets, configs and status (when I pause, a paused status message appears).
But somehow the other connectors are not taking over the tasks when I pause one.
In what scenario does a connector take over the tasks of another, failed one? Is pausing the right way to test this, or should we produce an error on one of the connectors so another takes over?
Please guide.
Sounds like it's working as expected.
Pausing has nothing to do with the fault tolerance settings and it'll completely stop the tasks. There's nothing to rebalance until unpaused.
The fault-tolerance settings for dead letter queue, skip+log, or halt are for when there are actual runtime exceptions in the connector that you cannot control through the API, for example a database or S3 network/authentication exception, or a serialization error in the Kafka client.
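For reference, those settings look roughly like this on a sink connector (the DLQ topic name is a placeholder); they apply to record-level failures such as conversion or serialization errors, not to a paused connector:

# errors.tolerance=none (the default) halts the task on the first bad record
errors.tolerance=all
errors.log.enable=true
errors.log.include.messages=true
# dead letter queue, available for sink connectors
errors.deadletterqueue.topic.name=dlq-s3-sink
errors.deadletterqueue.context.headers.enable=true
errors.deadletterqueue.topic.replication.factor=3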

Confluent Cloud Kafka - Audit Log Cluster : Sink Connector

For a Kafka cluster hosted in Confluent Cloud, there is an Audit Log cluster that gets created. It seems to be possible to hook a sink connector to this cluster and drain the events out of the "confluent-audit-log-events" topic.
However, I am running into the below error when I run the connector to do the same.
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [connect-offsets]
In my connect-distributed.properties file, I have the following settings:
offset.storage.topic=connect-offsets
offset.storage.replication.factor=3
offset.storage.partitions=3
What extra permissions need to be granted so that the connector can create the required topics in the cluster? The key/secret being used in the connect-distributed.properties file is a valid key/secret associated with the service account for this cluster.
Also, when I run the console consumer using the same key (as above), I am able to read the audit log events just fine.
It's confirmed that this feature (hooking up a connector to the Audit Log cluster) is not supported at the moment in Confluent Cloud. This feature may be available later this year at some point.
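As general background (it does not change the answer above): when a Connect worker is not allowed to create its internal topics, the usual options are to grant the service account CREATE, READ and WRITE ACLs on the three internal topics plus READ on the worker's group.id, or to pre-create the topics yourself. A hedged sketch of pre-creating the offsets topic with the standard kafka-topics tool; the endpoint and the ccloud.properties credentials file are placeholders, and connect-configs and connect-status would be created the same way (connect-configs with a single partition):

kafka-topics --bootstrap-server pkc-12345.us-east-1.aws.confluent.cloud:9092 \
  --command-config ccloud.properties \
  --create --topic connect-offsets --partitions 3 --replication-factor 3 \
  --config cleanup.policy=compact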

How to use Kafka connect to transmit data to Kafka broker in another machine?

I'm trying to use Kafka Connect in Confluent Platform 3.2.1 and everything works fine in my local environment. Then I encountered this problem when I tried to use a Kafka source connector to send data to another machine.
I deploy the Kafka JDBC source connector on machine A, trying to capture database A. Then I deploy Kafka broker B (along with ZooKeeper and Schema Registry) on machine B. The source connector cannot send data to broker B and throws the following exception:
[2017-05-19 16:37:22,709] ERROR Failed to commit offsets for WorkerSourceTask{id=test-multi-0} (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
[2017-05-19 16:38:27,711] ERROR Failed to flush WorkerSourceTask{id=test-multi-0}, timed out while waiting for producer to flush outstanding 3 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:304)
I tried configuring server.properties on broker B like this:
listeners=PLAINTEXT://:9092
and left the advertised.listeners setting commented out.
Then I use
bootstrap.servers=192.168.19.234:9092
in my source connector, where 192.168.19.234 is the IP of machine B. Machines A and B are in the same subnet.
I suspect this has something to do with my server.properties.
How should I configure this to get things working? Thanks in advance.
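For anyone hitting the same flush timeout: with advertised.listeners commented out, broker B ends up advertising its own hostname to clients, and if machine A cannot resolve that hostname the connector's producer times out exactly like this. A sketch of the relevant server.properties lines on broker B, assuming 192.168.19.234 is the address machine A can reach:

# bind on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# advertise an address that clients on machine A can actually reach
advertised.listeners=PLAINTEXT://192.168.19.234:9092

Broker B needs a restart after the change; the connector's bootstrap.servers can stay as it is.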