Kafka brokers are not reachable from MemSQL Ops on a Windows machine? - apache-kafka

While connecting MemSQL to Kafka using Docker, I'm unable to fetch the data that I pushed into the Kafka topic.
I'm getting an error like:
ERROR 1933 ER_EXTRACTOR_EXTRACTOR_GET_LATEST_OFFSETS: Cannot get source metadata for pipeline. Could not fetch Kafka metadata (are the Kafka brokers reachable from the Master Aggregator?)
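The error suggests the Master Aggregator cannot resolve or reach the address the Dockerised Kafka broker advertises. A minimal sketch of one working setup, assuming a single broker, a shared Docker network called memsql-net, a topic called test and a table called messages (all placeholder names), and a Kafka image that maps KAFKA_* environment variables to broker settings:

# Broker container: advertise a hostname the MemSQL containers can resolve
docker run -d --name kafka --network memsql-net \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
  wurstmeister/kafka

-- On the Master Aggregator: point the pipeline at the advertised address
-- and test connectivity before starting it
CREATE PIPELINE kafka_pipeline AS
  LOAD DATA KAFKA 'kafka:9092/test'
  INTO TABLE messages;
TEST PIPELINE kafka_pipeline;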

Related

MongoDB Kafka connector is running but the data is not getting published in the sink cluster

I was using the MongoDB Kafka connector on Confluent Cloud, with both the source and the sink being MongoDB clusters. Although the connectors on Confluent Cloud are running, and the source connector shows a spike and an increasing count of processed messages every time data is inserted into the source cluster, the data is not getting published in the sink cluster (the source and sink clusters belong to two different MongoDB accounts). Can somebody tell me why it is not able to transmit the data?
As both connectors are connected successfully and are up and running, I was expecting that data added to the MongoDB source cluster would be reflected in the sink cluster.
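One thing worth double-checking in a setup like this is that the sink connector subscribes to exactly the topic the source connector produces to; the MongoDB source connector typically writes to <topic.prefix>.<database>.<collection>. A minimal self-managed sink config sketch with placeholder values (on Confluent Cloud the equivalent fields appear in the connector settings):

# Sink connector properties; "topics" must match the topic the source actually produces
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
connection.uri=mongodb+srv://<sink-cluster-uri>
database=targetDb
collection=targetCollection
topics=sourcePrefix.sourceDb.sourceCollection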

Using Amazon MSK and Debezium SQL Server Connector. Error while fetching metadata with correlation id 7 : {TestKafkaDB= UNKNOWN_TOPIC_OR_PARTITION}

I was trying to connect my RDS MS SQL Server with the Debezium SQL Server Connector to stream changes to a Kafka cluster on Amazon MSK.
I configured the connector and the Kafka Connect worker, and ran Connect with
bin/connect-standalone.sh ../worker.properties connect/dbzmmssql.properties
and got WARN [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {TestKafkaDB=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1031)
I've solved this problem and just want to share my solution with others who are new to Kafka.
TestKafkaDB=UNKNOWN_TOPIC_OR_PARTITION basically means the connector didn't find a usable topic on the Kafka broker. The reason I was facing this is that the broker didn't automatically create a new topic for the stream.
To solve it, I edited the cluster configuration in the AWS MSK console, changed auto.create.topics.enable from the default false to true, and applied the updated configuration to the cluster; that fixed the problem.
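For reference, the property itself is a single line in a custom MSK configuration, and if you prefer to keep auto-creation disabled you can create the topic up front instead (the broker address is a placeholder, the topic name is taken from the question):

# Custom MSK cluster configuration
auto.create.topics.enable=true

# Alternative: create the topic manually before starting the connector
bin/kafka-topics.sh --create \
  --bootstrap-server <msk-broker>:9092 \
  --replication-factor 3 --partitions 1 \
  --topic TestKafkaDB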

How to make a Data Pipeline from MQTT to KAFKA Broker to MongoDB?

How can I make a data pipeline? I am sending data from MQTT to a Kafka topic using a source connector, and on the other side I have connected the Kafka broker to MongoDB using a sink connector. I am having trouble making a data pipeline that goes from MQTT to Kafka and then to MongoDB. Both connectors work properly individually. How can I integrate them?
[Screenshots: MQTT source connector config (Node 1), message published from MQTT, Kafka consumer output, MongoDB sink connector config (Node 2), MongoDB data]
It is hard to tell what exactly the problem is without more logs. Please share your connect config as well, and check the /status endpoint of each connector. I still don't understand exactly what issue you are facing: you are saying that the MQTT source connector sends messages successfully to the Kafka topic, and the MongoDB sink connector successfully reads that Kafka topic and writes to your MongoDB, which is your pipeline, so where is the error? Is it the same Kafka in both cases, or two separate Kafka clusters? Both look like localhost, but is it the same machine?
Please elaborate and explain what you are expecting. What does "pipeline" mean in your case?
You need both connectors to share the same Kafka cluster. What do node 1 and node 2 mean: are they separate Kafka instances? Your connectors need to connect to the same Kafka node/cluster in order to share the data inside the Kafka topic, one writing into it and one reading from it. Share your bootstrap server parameters, and share the broker's server.properties as well.
In order to run two different Connect clusters against the same Kafka cluster, you need to give each Connect cluster its own internal topics (see the sketch after this list):
config.storage.topic
offset.storage.topic
status.storage.topic
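A minimal worker config sketch for the second Connect cluster, assuming distributed mode; the group.id and the three internal topic names are placeholders and only need to differ from the first cluster's:

# worker2.properties (second Connect cluster, same Kafka brokers)
bootstrap.servers=localhost:9092
group.id=connect-cluster-2
config.storage.topic=connect2-configs
offset.storage.topic=connect2-offsets
status.storage.topic=connect2-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter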

KSQL Server connects to Remote Kafka Cluster

I'm using a local KSQL server (latest version of Confluent Platform) and trying to connect to a tunneled Kafka cluster on my AWS server. However, the KSQL server fails to start with the error below:
ERROR Failed to initialize TopicClient: Unexpected broker id, expected 5 or empty string, but received ConfigResource(type=BROKER, name='1').name (io.confluent.ksql.services.KafkaTopicClient:279)
ERROR Failed to start KSQL (io.confluent.ksql.rest.server.KsqlServerMain:53)
io.confluent.ksql.util.KsqlException: Could not fetch broker information. KSQL cannot initialize
at io.confluent.ksql.services.KafkaTopicClientImpl.isTopicDeleteEnabled(KafkaTopicClientImpl.java:280)
at io.confluent.ksql.services.KafkaTopicClientImpl.<init>(KafkaTopicClientImpl.java:66)
at io.confluent.ksql.services.DefaultServiceContext.create(DefaultServiceContext.java:43)
at io.confluent.ksql.rest.server.KsqlRestApplication.buildApplication(KsqlRestApplication.java:303)
at io.confluent.ksql.rest.server.KsqlServerMain.createExecutable(KsqlServerMain.java:85)
at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:50)
Caused by: org.apache.kafka.common.errors.InvalidRequestException: Unexpected broker id, expected 5 or empty string, but received ConfigResource(type=BROKER, name='1').name
My tunneled Kafka cluster is running version 2.0.0. I even tried the previous version of KSQL (5.0.x), which is compatible with Kafka 2.0.x, but the issue still happens.
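A workaround that often helps with tunneled Kafka clusters is to keep the brokers' advertised hostnames resolvable on the KSQL machine, so the addresses returned in metadata still route through the tunnel to the right broker. A sketch with hostnames, ports and the bastion host all placeholders:

# One SSH tunnel per broker, each on the port that broker advertises
ssh -N -L 9092:broker-1.internal:9092 -L 9093:broker-5.internal:9093 user@bastion

# /etc/hosts on the KSQL machine: resolve the advertised names locally
127.0.0.1 broker-1.internal broker-5.internal

# ksql-server.properties
bootstrap.servers=broker-1.internal:9092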

How to use Kafka Connect to transmit data to a Kafka broker on another machine?

I'm trying to use Kafka Connect in Confluent Platform 3.2.1 and everything works fine in my local environment. Then I ran into this problem when trying to use a Kafka source connector to send data to another machine.
I deployed a Kafka JDBC source connector on machine A to capture database A, and deployed Kafka broker B (along with ZooKeeper and Schema Registry) on machine B. The source connector cannot send data to broker B and throws the following exception:
[2017-05-19 16:37:22,709] ERROR Failed to commit offsets for WorkerSourceTask{id=test-multi-0} (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
[2017-05-19 16:38:27,711] ERROR Failed to flush WorkerSourceTask{id=test-multi-0}, timed out while waiting for producer to flush outstanding 3 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:304)
I tried configuring server.properties on broker B like this:
listeners=PLAINTEXT://:9092
and left the advertised.listeners setting commented out.
Then I used
bootstrap.servers=192.168.19.234:9092
in my source connector where 192.168.19.234 is the IP of machine B. Machine A and B are in the same subnet.
I suspect this has something to do with my server.properties.
How should I configure things to make this work? Thanks in advance.
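Not a definitive answer, but the usual fix for this symptom is to set advertised.listeners on broker B to an address machine A can reach, rather than leaving it commented out. A sketch using the IP from the question:

# server.properties on machine B
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.19.234:9092

# Connect worker config on machine A
bootstrap.servers=192.168.19.234:9092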