Kafka in Talend 8.0.1 - apache-kafka

I started the Zookeeper and Kafka services, which run on Ubuntu 20.04. I can also connect to my broker from Talend and create topics.
The problem is that, using this same KafkaConnection, I am not able to send a message as bytes.
Here is the flow in question:
With this job, as I said, I manage to connect to the broker and create a topic, but not to send a message as bytes to my topic.
Here is the flow after I click Run:
And the error message :
[INFO ] 17:11:41 org.apache.kafka.common.utils.AppInfoParser- Kafka version : 1.1.0
[INFO ] 17:11:41 org.apache.kafka.common.utils.AppInfoParser- Kafka commitId : fdcf75ea326b8e07
[INFO ] 17:11:41 sandbox.kafkatopic_0_1.KafkaTopic- tFileInputDelimited_1 - Retrieving records from the datasource.
[INFO ] 17:11:41 sandbox.kafkatopic_0_1.KafkaTopic- tLogRow_2 - Content of row 1: test d'envois de message dans kafka
[INFO ] 17:11:41 sandbox.kafkatopic_0_1.KafkaTopic- tLogRow_1 - Content of row 1: test d'envois de message dans kafka
[WARN ] 17:11:43 org.apache.kafka.clients.NetworkClient- [Producer clientId=producer-1] Connection to node -1 could not be established. Broker may not be available.
I use this Kafka version: kafka_2.13-3.2.1
For the record, in the KafkaConnection I select version 1.1.0, because with the newest Kafka version offered by this component I didn't even manage to create a topic:
Later, I also tried to implement SSL/TLS security, and I am having issues with that too.
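For reference, the "Connection to node -1 could not be established" warning means the producer cannot reach the broker at the address it was given. When the client (here the Talend job) runs on a different machine than the broker, the broker normally has to advertise a listener that the client machine can resolve and reach. A minimal server.properties sketch, using the placeholder hostname ubuntu2004-host (an assumption, not taken from the original setup):
# server.properties on the Ubuntu broker - illustrative only
listeners=PLAINTEXT://0.0.0.0:9092
# must be a hostname/IP that the Talend machine can actually reach
advertised.listeners=PLAINTEXT://ubuntu2004-host:9092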

Related

Apache Nifi PublishKafka Timeout Exception

I want to publish my JSON data with the PublishKafka processor in Apache NiFi.
Processor: PublishKafka_2_0
Apache NiFi version: 1.15.3
Kafka version: kafka_2.13-3.1.0
Here are my configuration settings:
My Kafka server is live and I can produce to the "mytopic" topic from the console.
I get this error. What am I missing? What should I do?

Kafka Admin client unregistered causing metadata issues

After migrating our microservice functionality to Spring Cloud Function, we have been facing issues with one of the producer topics.
Event of type: abc and key: xxx_yyy could not be sent to kafka org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$ProducerConfigurationMessageHandler#2333d598]; nested exception is org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic pc-abc not present in metadata after 60000 ms.
o.s.kafka.support.LoggingProducerListener - Exception thrown when sending a message with key='byte[15]' and payload='byte[256]' to topic pc-abc and partition 6: org.apache.kafka.common.errors.TimeoutException: Topic pc-abc not present in metadata after 60000 ms.
FYI: Topics are already created in our staging/prod environment and are not to be created as the application starts.
My producer config:
spring.cloud.stream.bindings.pc-abc-out-0.content-type=application/json
spring.cloud.stream.bindings.pc-abc-out-0.destination=pc-abc
spring.cloud.stream.bindings.pc-abc-out-0.producer.header-mode=headers
spring.cloud.stream.bindings.pc-abc-out-0.producer.partition-count=5
spring.cloud.stream.bindings.pc-abc-out-0.producer.partitionKeyExpression=payload.key
spring.cloud.stream.kafka.bindings.pc-abc-out-0.producer.sync=true
I am kind of stuck at this point and exhausted. Has anyone else faced this issue?
Spring Cloud version: 2.5.5
Kafka: 2.7.1
The issue is: the producer is configured with partition-count=5, yet Kafka is looking for partition number 6, which obviously does not exist. I have commented out the auto-add-partitions property, but the issue still turns up. Is this stale configuration? How do I force Kafka to pick up the new configuration?
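As a hedged sketch only (the property names come from the Spring Cloud Stream Kafka binder, not from the question): since the topic already exists in staging/prod, the partition count declared on the binding has to be compatible with the real partition count of pc-abc, or the binder must be allowed to add partitions. Two illustrative options:
# Option 1: let the binder grow the existing topic to the declared partition count
spring.cloud.stream.kafka.binder.autoAddPartitions=true
# Option 2: make the declared count match reality; a send to partition 6
# can only succeed if pc-abc actually has at least 7 partitions
spring.cloud.stream.bindings.pc-abc-out-0.producer.partition-count=7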

Connection terminates between Mule 4 and Confluent Cloud with Apache Kafka Connector 4.5.0 but connects with 3.0.7

Setting up a (very simple) POC with Mule 4 and Confluent Cloud:
I have been unable to establish a successful connection using the latest version of the Mule 4 Apache Kafka Connector (4.5.0). If I downgrade it to 3.0.7 and use the same configuration it works fine. Why is this?
The working 3.0.7 configuration (for a basic producer) looks like this:
<kafka:kafka-producer-config name="Apache_Kafka_Producer_configuration" doc:name="Apache Kafka Producer configuration" doc:id="2ba6262d-2ff8-4282-910e-5c9e3d347d50" >
<kafka:basic-kafka-producer-connection bootstrapServers="${kafka.bootstrapserver}" >
<kafka:additional-properties >
<kafka:additional-property key="sasl.jaas.config" value="org.apache.kafka.common.security.plain.PlainLoginModule required username='${kafka.key}' password='${kafka.secret}';" />
<kafka:additional-property key="ssl.endpoint.identification.algorithm" value="https" />
<kafka:additional-property key="security.protocol" value="SASL_SSL" />
<kafka:additional-property key="sasl.mechanism" value="PLAIN" />
<kafka:additional-property key="serviceName" value="kafka" />
</kafka:additional-properties>
</kafka:basic-kafka-producer-connection>
</kafka:kafka-producer-config>
And the failing 4.5.0 configuration (also for a basic producer) looks like this:
<kafka:producer-config name="Apache_Kafka_Producer_configuration" doc:name="Apache Kafka Producer configuration" doc:id="7aa22dcc-7895-4254-ba51-e8bc5e2e9c2e" >
<kafka:producer-sasl-plain-connection username="${kafka.key}" password="${kafka.secret}" endpointIdentificationAlgorithm="https">
<kafka:bootstrap-servers >
<kafka:bootstrap-server value="${kafka.bootstrapserver}" />
</kafka:bootstrap-servers>
</kafka:producer-sasl-plain-connection>
</kafka:producer-config>
You can see that they both:
Use a SASL plain-text connection
Have an SSL endpoint identification algorithm of HTTPS
Specify the same bootstrap server, API key, and secret
There is very little else in the flow other than an HTTP listener and a Set Payload.
Messages sent using the earlier connector version arrive on the Confluent Cloud topic fine; with 4.5.0, however, the application fails to start and repeatedly prints errors such as:
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator: [Producer clientId=producer-1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE
org.apache.kafka.clients.NetworkClient: [Producer clientId=producer-1] Completed connection to node -1. Fetching API versions.
org.apache.kafka.clients.NetworkClient: [Producer clientId=producer-1] Found least loaded connecting node pkc-4vndj.australia-southeast1.gcp.confluent.cloud:9092 (id: -1 rack: null)
org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.testConnectivity:179 #23ad5b4f] [processor: ; event: ] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-connectivity-1, groupId=connectivity] Node -1 disconnected.
org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.testConnectivity:179 #23ad5b4f] [processor: ; event: ] org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-connectivity-1, groupId=connectivity] Connection to node -1 (xxxx.australia-southeast1.gcp.confluent.cloud/35.244.90.132:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
org.apache.kafka.clients.NetworkClient: [Consumer clientId=consumer-connectivity-1, groupId=connectivity] Bootstrap broker pkc-4vndj.australia-southeast1.gcp.confluent.cloud:9092 (id: -1 rack: null) disconnected
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient: [Consumer clientId=consumer-connectivity-1, groupId=connectivity] Cancelled request with header RequestHeader(apiKey=METADATA, apiVersion=9, clientId=consumer-connectivity-1, correlationId=17) due to node -1 being disconnected
org.apache.kafka.common.network.Selector: [Producer clientId=producer-1] Connection with xxxxx.australia-southeast1.gcp.confluent.cloud/35.244.90.132 disconnected
And the stack trace shows an EOFException:
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:120) ~[kafka-clients-2.7.0.jar:?]
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:470) ~[kafka-clients-2.7.0.jar:?]
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveKafkaResponse(SaslClientAuthenticator.java:560) ~[kafka-clients-2.7.0.jar:?]
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:248) ~[kafka-clients-2.7.0.jar:?]
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:176) ~[kafka-clients-2.7.0.jar:?]
Looking at the Apache Kafka source code, this appears to be a zero-byte response on the wire.
My suspicion is that version 4.5.0 may not be constructing and instantiating the org.apache.kafka.common.security.plain.PlainLoginModule that Confluent Cloud requires to authenticate requests.
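For comparison, the working 3.0.7 configuration effectively hands the following client-side properties to the underlying kafka-clients producer (this mapping is my reading of the two configurations, not something confirmed from the connector source). If the 4.5.0 connection negotiates SASL PLAIN over a plaintext channel instead of SASL_SSL, the broker closing the socket during authentication, and hence the EOFException above, would be the expected symptom:
# kafka-clients properties implied by the working 3.0.7 config - illustrative only
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<API key>' password='<API secret>';
ssl.endpoint.identification.algorithm=https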

Kafka SSL Not streaming data to SSL Druid

I am new to Druid and trying to do Kafka (SSL) ingestion into an SSL-enabled Druid. Druid is running on HTTPS.
Kafka Version : 2.2.2
Druid Version : 0.18.1
Kafka SSL works, and I can confirm it using the producer and consumer scripts:
bin/kafka-console-producer.sh --broker-list kafka01:9093 --topic testssl --producer.config config/client.properties
bin/kafka-console-consumer.sh --bootstrap-server kafka01:9093 --topic testssl --consumer.config config/client.properties --from-beginning
Both commands work, so I can confirm that Kafka SSL is set up.
Druid SSL Configuration :
druid.enablePlaintextPort=false
druid.enableTlsPort=true
druid.server.https.keyStoreType=jks
druid.server.https.keyStorePath=.jks
druid.server.https.keyStorePassword=
druid.server.https.certAlias=
druid.client.https.protocol=TLSv1.2
druid.client.https.trustStoreType=jks
druid.client.https.trustStorePath=.jks
druid.client.https.trustStorePassword=
Kafka SSL configuration :
ssl.truststore.location=<location>.jks --- The same is used for druid also
ssl.truststore.password=<password>
ssl.keystore.location=<location>.jks --- The same is used for druid also
ssl.keystore.password=<password>
ssl.key.password=<password>
ssl.enabled.protocols=TLSv1.2
ssl.client.auth=none
ssl.endpoint.identification.algorithm=
security.protocol=SSL
My consumerProperties spec looks like this :
"consumerProperties": {
"bootstrap.servers" : "kafka01:9093",
"security.protocol": "SSL",
"ssl.enabled.protocols" : "TLSv1.2",
"ssl.endpoint.identification.algorithm": "",
"group.id" : "<grouop_name>",
"ssl.keystore.type": "JKS",
"ssl.keystore.location" : "/datadrive/<location>.jks",
"ssl.keystore.password" : "<password>",
"ssl.key.password" : "<password>",
"ssl.truststore.location" : "/datadrive/<location>.jks",
"ssl.truststore.password" : "<password>",
"ssl.truststore.type": "JKS"
}
After ingestion, the datasource gets created and the segments also get created but with 0 rows.
And after some time I continuously get the following in the Druid logs:
[task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=kafka-supervisor-llhigfpg] Sending READ_COMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(testssl-0)) to broker kafka01:9093 (id: 0 rack: null)
And after a while, in coordinator-overlord.log, I get:
2020-08-03T16:51:42,881 DEBUG [JettyScheduler] org.eclipse.jetty.io.WriteFlusher - ignored: WriteFlusher#278a176a{IDLE}->null
java.util.concurrent.TimeoutException: Idle timeout expired: 300001/300000 ms
I am not sure what has gone wrong, and I could not find much online about this issue. I need help with this.
NOTE: When Druid is non-HTTPS and Kafka is not SSL-enabled, everything works fine.

Kafka - Error while fetching metadata with correlation id - LEADER_NOT_AVAILABLE

I have set up a Kafka cluster locally: three brokers with the following properties:
broker.id=0
listeners=PLAINTEXT://:9092
broker.id=1
listeners=PLAINTEXT://:9091
broker.id=2
listeners=PLAINTEXT://:9090
Things were working fine, but I am now getting this error:
WARN Error while fetching metadata with correlation id 1 : {TRAIL_TOPIC=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
I am also trying to write messages via a Java-based client, and I am getting the error: unable to fetch metadata in 6000 ms.
I faced the same problem; it happens when the topic does not exist and the broker setting auto.create.topics.enable is set to false, so the topic is not created automatically. I was using bin/connect-standalone, so I hadn't specified the topics I would use. I changed this setting to true and it solved my problem.
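A minimal broker-side sketch of that setting, assuming you do want missing topics (such as TRAIL_TOPIC above) created automatically on first use; the alternative is to create the topic explicitly with kafka-topics.sh before producing:
# server.properties on each broker - illustrative only
auto.create.topics.enable=true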