Apache NiFi PublishKafka Timeout Exception

I want to publish my JSON data to Kafka through the PublishKafka processor in Apache NiFi.
Processor: PublishKafka_2_0
Apache NiFi version: 1.15.3
Kafka version: kafka_2.13-3.1.0
Here are my configuration settings:
My Kafka server is live and I can produce to the "mytopic" topic from the console.
I get this error. What am I missing? What should I do?
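One way to narrow down a publish timeout, before digging into the processor settings, is to run a plain Java producer from the same machine that runs NiFi, using the same broker address. The sketch below is only an assumption-laden example: localhost:9092 stands in for whatever the processor's broker property points to, and "mytopic" is the topic from the question. If this also times out, the cause is usually that the broker's advertised listener is not reachable from the NiFi host rather than anything NiFi-specific.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class PublishCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder: use the same broker address configured in PublishKafka_2_0.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocks until the broker acknowledges; a timeout here points at an
            // advertised listener that is not reachable from this machine.
            RecordMetadata meta = producer.send(new ProducerRecord<>("mytopic", "{\"test\":true}")).get();
            System.out.println("Written to " + meta.topic() + "-" + meta.partition() + " @ offset " + meta.offset());
        }
    }
}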

Related

Kafka in Talend 8.0.1

I started the Zookeeper and Kafka services, which run on Ubuntu 20.04. I can also connect to my broker from Talend and create topics.
The problem is that, using this same KafkaConnection, I am not able to send a message in bytes.
Here is the flow in question:
With this job, as I said, I manage to connect to the broker and create a topic, but not to send a message in bytes to my topic.
Here is the flow after I click on run:
And the error message :
[INFO ] 17:11:41 org.apache.kafka.common.utils.AppInfoParser- Kafka version : 1.1.0
[INFO ] 17:11:41 org.apache.kafka.common.utils.AppInfoParser- Kafka commitId : fdcf75ea326b8e07
[INFO ] 17:11:41 sandbox.kafkatopic_0_1.KafkaTopic- tFileInputDelimited_1 - Retrieving records from the datasource.
[INFO ] 17:11:41 sandbox.kafkatopic_0_1.KafkaTopic- tLogRow_2 - Content of row 1: test d'envois de message dans kafka
[INFO ] 17:11:41 sandbox.kafkatopic_0_1.KafkaTopic- tLogRow_1 - Content of row 1: test d'envois de message dans kafka
[WARN ] 17:11:43 org.apache.kafka.clients.NetworkClient- [Producer clientId=producer-1] Connection to node -1 could not be established. Broker may not be available.
I use this Kafka version: kafka_2.13-3.2.1
For the record, in the KafkaConnection I selected version 1.1.0, because with the newest Kafka version offered by this component I did not even manage to create a topic:
I later tried to implement SSL/TLS security as well, and I am having issues with that too.
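The "Connection to node -1 could not be established" warning usually means the producer cannot reach the host and port that the broker advertises. Before adding SSL/TLS on top, it may help to check plain connectivity from the Talend machine with a small standalone client. This is only a sketch; ubuntu-host:9092 below is a placeholder for whatever bootstrap address is configured in the KafkaConnection.

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder: the exact host:port configured in the Talend KafkaConnection.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "ubuntu-host:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // If this call times out, check listeners/advertised.listeners in the broker's
            // server.properties: the advertised host must be resolvable and reachable from here.
            admin.describeCluster().nodes().get().forEach(node -> System.out.println(node));
        }
    }
}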

Kafka connect MongoDB sink connector using kafka-avro-console-producer

I'm trying to write some documents to MongoDB using the Kafka Connect MongoDB connector. I've managed to set up all the required components and start up the connector, but when I send a message to Kafka using kafka-avro-console-producer, Kafka Connect gives me the following error:
org.apache.kafka.connect.errors.DataException: Error: `operationType` field is doc is missing.
I've tried adding this field to the message, but then Kafka Connect asks me to include a documentKey field as well. It seems I need to include some extra fields beyond the payload defined in my schema, but I can't find comprehensive documentation. Does anyone have an example of a Kafka message payload (using kafka-avro-console-producer) that goes through a Kafka -> Kafka Connect -> MongoDB pipeline?
Here is an example of one of the messages I'm sending to Kafka (by the way, kafka-avro-console-consumer is able to consume the messages):
./kafka-avro-console-producer --broker-list kafka:9093 --topic sampledata --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"field1","type":"string"}]}'
{"field1": "value1"}
And here is the configuration of the sink connector:
{"name": "mongo-sink",
"config": {
"connector.class":"com.mongodb.kafka.connect.MongoSinkConnector",
"value.converter":"io.confluent.connect.avro.AvroConverter", "value.converter.schema.registry.url":"http://schemaregistry:8081",
"connection.uri":"mongodb://cadb:27017",
"database":"cognitive_assistant",
"collection":"topicData",
"topics":"sampledata6",
"change.data.capture.handler": "com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler"
}
}
I've just managed to make the connector work: I removed the change.data.capture.handler property from the connector configuration. The ChangeStreamHandler expects messages shaped like MongoDB change-stream events (with fields such as operationType and documentKey), which is why plain records matching my schema were rejected; without the handler, the sink writes the record value as a document directly.

Messages are not getting consumed

I have added the configuration below to the application.properties file of a Spring Boot application with a Camel implementation, but the messages are not getting consumed. Am I missing any configuration? Any pointers on implementing a consumer for Azure Event Hubs using the Kafka protocol and Camel?
bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
The route looks like this:
from("kafka:{{topicName}}?brokers=NAMESPACENAME.servicebus.windows.net:9093" )
.log("Message received from Kafka : ${body}");
I found the solution to this issue. Since I was using Spring Boot auto-configuration (camel-kafka-starter), the entries in the application.properties file had to be modified as given below:
camel.component.kafka.brokers=NAMESPACENAME.servicebus.windows.net:9093
camel.component.kafka.security-protocol=SASL_SSL
camel.component.kafka.sasl-mechanism=PLAIN
camel.component.kafka.sasl-jaas-config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
The consumer route for the Azure event hub with Kafka protocol will look like this:
from("kafka:{{topicName}}")
.log("Message received from Kafka : ${body}");
Hope this solution helps with consuming events from Azure Event Hubs in Camel using the Kafka protocol.

Flink Kafka connector to eventhub

I am using Apache Flink and trying to connect to Azure Event Hubs using the Apache Kafka protocol to receive messages from it. I manage to connect to Event Hubs and receive messages, but I can't use the Flink feature setStartFromTimestamp(...) as described here (https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#kafka-consumers-start-position-configuration).
When I try to fetch messages starting from a timestamp, Kafka says that the message format on the broker side is older than 0.10.0.
Has anybody faced this?
Apache Kafka client version is 2.0.1
Apache Flink version is 1.7.2
UPDATE: I tried the Azure Event Hubs quickstart examples (https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/quickstart/java); in the consumer package I added code to look up an offset by timestamp, and it returns null, as expected when the message format version is below Kafka 0.10.0.
// List the partitions of the topic and map each one to a target timestamp (0L = from the beginning).
List<PartitionInfo> partitionInfos = consumer.partitionsFor(TOPIC);
List<TopicPartition> topicPartitions = partitionInfos.stream().map(pi -> new TopicPartition(pi.topic(), pi.partition())).collect(Collectors.toList());
Map<TopicPartition, Long> topicPartitionToTimestampMap = topicPartitions.stream().collect(Collectors.toMap(tp -> tp, tp -> 0L));
// Ask the broker for the earliest offset at or after each timestamp; null values indicate no match or unsupported message format.
Map<TopicPartition, OffsetAndTimestamp> offsetAndTimestamp = consumer.offsetsForTimes(topicPartitionToTimestampMap);
System.out.println(offsetAndTimestamp);
Sorry we missed this. Kafka's offsetsForTimes() is now supported in Event Hubs (it was previously unsupported).
Feel free to open an issue against our GitHub repository in the future: https://github.com/Azure/azure-event-hubs-for-kafka
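With offsetsForTimes() supported on the Event Hubs side, the setStartFromTimestamp(...) start position should now be usable from Flink. The sketch below shows how it is typically wired up with the universal Kafka connector; the namespace, topic name, consumer group, and connection string are placeholders, not values from the original setup.

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class EventHubTimestampStart {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "NAMESPACENAME.servicebus.windows.net:9093");
        props.setProperty("group.id", "$Default"); // Event Hubs default consumer group
        props.setProperty("security.protocol", "SASL_SSL");
        props.setProperty("sasl.mechanism", "PLAIN");
        props.setProperty("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"$ConnectionString\" password=\"{YOUR.EVENTHUBS.CONNECTION.STRING}\";");

        FlinkKafkaConsumer<String> consumer =
            new FlinkKafkaConsumer<>("mytopic", new SimpleStringSchema(), props);
        // Start from records whose timestamp is at or after this epoch-millis value (here: one hour ago).
        consumer.setStartFromTimestamp(System.currentTimeMillis() - 3600_000L);

        env.addSource(consumer).print();
        env.execute("eventhub-timestamp-start");
    }
}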

LogStash Kafka output/input is not working

I am trying to use Logstash with a Kafka broker, but I cannot get them to work together.
The versions are:
Logstash 2.4
Kafka 0.8.2.2 with Scala 2.10
The Logstash config file is:
input {
  stdin {
  }
}
output {
  stdout {
    codec => rubydebug
  }
  kafka {
    bootstrap_servers => "10.120.16.202:6667,10.120.16.203:6667,10.120.16.204:6667"
    topic_id => "cephosd1"
  }
}
I can list the topic cephosd1 from Kafka.
The stdout output also prints the content.
But I cannot read anything with kafka-console-consumer.sh.
I think you have a compatibility issue. If you check the version compatibility matrix between Logstash, Kafka and the kafka output plugin, you'll see that the kafka output plugin in Logstash 2.4 uses the Kafka 0.9 client version.
If you have a Kafka broker 0.8.2.2, it is not compatible with the client version 0.9 (the other way around would be ok). You can either downgrade to Logstash 2.0 or upgrade your Kafka broker to 0.9.